00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 115 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3293 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.099 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.100 The recommended git tool is: git 00:00:00.100 using credential 00000000-0000-0000-0000-000000000002 00:00:00.102 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.135 Fetching changes from the remote Git repository 00:00:00.137 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.175 Using shallow fetch with depth 1 00:00:00.175 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.175 > git --version # timeout=10 00:00:00.211 > git --version # 'git version 2.39.2' 00:00:00.211 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.230 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.230 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.873 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.884 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.897 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:06.897 > git config core.sparsecheckout # timeout=10 00:00:06.908 > git read-tree -mu HEAD # timeout=10 00:00:06.924 > git checkout -f 
f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:06.973 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:06.974 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:07.065 [Pipeline] Start of Pipeline 00:00:07.079 [Pipeline] library 00:00:07.080 Loading library shm_lib@master 00:00:07.080 Library shm_lib@master is cached. Copying from home. 00:00:07.095 [Pipeline] node 00:00:07.101 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.105 [Pipeline] { 00:00:07.113 [Pipeline] catchError 00:00:07.114 [Pipeline] { 00:00:07.123 [Pipeline] wrap 00:00:07.129 [Pipeline] { 00:00:07.134 [Pipeline] stage 00:00:07.136 [Pipeline] { (Prologue) 00:00:07.311 [Pipeline] sh 00:00:07.588 + logger -p user.info -t JENKINS-CI 00:00:07.606 [Pipeline] echo 00:00:07.607 Node: GP6 00:00:07.615 [Pipeline] sh 00:00:07.915 [Pipeline] setCustomBuildProperty 00:00:07.933 [Pipeline] echo 00:00:07.939 Cleanup processes 00:00:07.957 [Pipeline] sh 00:00:08.257 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.257 547184 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.271 [Pipeline] sh 00:00:08.554 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.554 ++ grep -v 'sudo pgrep' 00:00:08.554 ++ awk '{print $1}' 00:00:08.554 + sudo kill -9 00:00:08.554 + true 00:00:08.570 [Pipeline] cleanWs 00:00:08.581 [WS-CLEANUP] Deleting project workspace... 00:00:08.581 [WS-CLEANUP] Deferred wipeout is used... 
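(Editor's note: the "Cleanup processes" step above — pgrep for stale SPDK processes in the workspace, filter out the pgrep invocation itself, kill the rest, then "+ true" — can be sketched as a standalone function. This is a hypothetical rewrite of the pipeline's inline shell, not the actual job script; sudo is omitted for brevity.)

```shell
# Sketch of the workspace cleanup step: list processes whose command line
# mentions the workspace path, drop the pgrep line itself, and kill the rest.
# "|| true" mirrors the log's "+ true": kill with an empty PID list fails,
# and that must not abort the pipeline when there is nothing to clean up.
cleanup_stale() {
    pids=$(pgrep -af "$1" | grep -v pgrep | awk '{print $1}')
    kill -9 $pids 2>/dev/null || true
}

cleanup_stale /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
echo "cleanup done"   # prints "cleanup done" even when no process matched
```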
00:00:08.594 [WS-CLEANUP] done 00:00:08.599 [Pipeline] setCustomBuildProperty 00:00:08.617 [Pipeline] sh 00:00:08.901 + sudo git config --global --replace-all safe.directory '*' 00:00:08.987 [Pipeline] httpRequest 00:00:09.015 [Pipeline] echo 00:00:09.017 Sorcerer 10.211.164.101 is alive 00:00:09.024 [Pipeline] httpRequest 00:00:09.029 HttpMethod: GET 00:00:09.029 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:09.030 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:09.048 Response Code: HTTP/1.1 200 OK 00:00:09.049 Success: Status code 200 is in the accepted range: 200,404 00:00:09.049 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:15.845 [Pipeline] sh 00:00:16.130 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:16.147 [Pipeline] httpRequest 00:00:16.167 [Pipeline] echo 00:00:16.169 Sorcerer 10.211.164.101 is alive 00:00:16.180 [Pipeline] httpRequest 00:00:16.185 HttpMethod: GET 00:00:16.186 URL: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:16.187 Sending request to url: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:16.211 Response Code: HTTP/1.1 200 OK 00:00:16.212 Success: Status code 200 is in the accepted range: 200,404 00:00:16.212 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:01:05.021 [Pipeline] sh 00:01:05.303 + tar --no-same-owner -xf spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:01:07.845 [Pipeline] sh 00:01:08.121 + git -C spdk log --oneline -n5 00:01:08.121 241d0f3c9 test: fix dpdk builds on ubuntu24 00:01:08.121 327de4622 test/bdev: Skip "hidden" nvme devices from the sysfs 00:01:08.121 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:08.121 330a4f94d 
nvme: check pthread_mutex_destroy() return value 00:01:08.121 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:08.140 [Pipeline] withCredentials 00:01:08.150 > git --version # timeout=10 00:01:08.163 > git --version # 'git version 2.39.2' 00:01:08.178 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:08.180 [Pipeline] { 00:01:08.188 [Pipeline] retry 00:01:08.190 [Pipeline] { 00:01:08.207 [Pipeline] sh 00:01:08.487 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:08.499 [Pipeline] } 00:01:08.522 [Pipeline] // retry 00:01:08.528 [Pipeline] } 00:01:08.548 [Pipeline] // withCredentials 00:01:08.559 [Pipeline] httpRequest 00:01:08.581 [Pipeline] echo 00:01:08.583 Sorcerer 10.211.164.101 is alive 00:01:08.592 [Pipeline] httpRequest 00:01:08.596 HttpMethod: GET 00:01:08.597 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:08.597 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:08.607 Response Code: HTTP/1.1 200 OK 00:01:08.607 Success: Status code 200 is in the accepted range: 200,404 00:01:08.608 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:14.693 [Pipeline] sh 00:01:14.974 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:16.889 [Pipeline] sh 00:01:17.171 + git -C dpdk log --oneline -n5 00:01:17.171 caf0f5d395 version: 22.11.4 00:01:17.171 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:17.171 dc9c799c7d vhost: fix missing spinlock unlock 00:01:17.171 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:17.171 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:17.182 [Pipeline] } 00:01:17.200 [Pipeline] // stage 00:01:17.209 [Pipeline] stage 00:01:17.212 [Pipeline] { (Prepare) 00:01:17.233 [Pipeline] writeFile 00:01:17.254 [Pipeline] sh 00:01:17.536 + logger -p user.info -t 
JENKINS-CI 00:01:17.546 [Pipeline] sh 00:01:17.824 + logger -p user.info -t JENKINS-CI 00:01:17.838 [Pipeline] sh 00:01:18.120 + cat autorun-spdk.conf 00:01:18.120 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.120 SPDK_TEST_NVMF=1 00:01:18.120 SPDK_TEST_NVME_CLI=1 00:01:18.120 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.120 SPDK_TEST_NVMF_NICS=e810 00:01:18.120 SPDK_TEST_VFIOUSER=1 00:01:18.120 SPDK_RUN_UBSAN=1 00:01:18.120 NET_TYPE=phy 00:01:18.120 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:18.120 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:18.128 RUN_NIGHTLY=1 00:01:18.133 [Pipeline] readFile 00:01:18.161 [Pipeline] withEnv 00:01:18.163 [Pipeline] { 00:01:18.178 [Pipeline] sh 00:01:18.466 + set -ex 00:01:18.466 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:18.466 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:18.466 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.466 ++ SPDK_TEST_NVMF=1 00:01:18.466 ++ SPDK_TEST_NVME_CLI=1 00:01:18.466 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.466 ++ SPDK_TEST_NVMF_NICS=e810 00:01:18.466 ++ SPDK_TEST_VFIOUSER=1 00:01:18.466 ++ SPDK_RUN_UBSAN=1 00:01:18.466 ++ NET_TYPE=phy 00:01:18.466 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:18.466 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:18.466 ++ RUN_NIGHTLY=1 00:01:18.466 + case $SPDK_TEST_NVMF_NICS in 00:01:18.466 + DRIVERS=ice 00:01:18.466 + [[ tcp == \r\d\m\a ]] 00:01:18.466 + [[ -n ice ]] 00:01:18.466 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:18.466 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:18.466 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:18.466 rmmod: ERROR: Module irdma is not currently loaded 00:01:18.466 rmmod: ERROR: Module i40iw is not currently loaded 00:01:18.466 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:18.466 + true 00:01:18.466 + for D in $DRIVERS 00:01:18.466 + sudo modprobe ice 00:01:18.466 + 
exit 0 00:01:18.477 [Pipeline] } 00:01:18.495 [Pipeline] // withEnv 00:01:18.500 [Pipeline] } 00:01:18.517 [Pipeline] // stage 00:01:18.527 [Pipeline] catchError 00:01:18.529 [Pipeline] { 00:01:18.545 [Pipeline] timeout 00:01:18.545 Timeout set to expire in 50 min 00:01:18.547 [Pipeline] { 00:01:18.562 [Pipeline] stage 00:01:18.564 [Pipeline] { (Tests) 00:01:18.582 [Pipeline] sh 00:01:18.864 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:18.864 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:18.864 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:18.864 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:18.864 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:18.864 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:18.864 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:18.864 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:18.864 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:18.864 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:18.864 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:18.864 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:18.864 + source /etc/os-release 00:01:18.864 ++ NAME='Fedora Linux' 00:01:18.864 ++ VERSION='38 (Cloud Edition)' 00:01:18.864 ++ ID=fedora 00:01:18.864 ++ VERSION_ID=38 00:01:18.864 ++ VERSION_CODENAME= 00:01:18.864 ++ PLATFORM_ID=platform:f38 00:01:18.864 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:18.864 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:18.864 ++ LOGO=fedora-logo-icon 00:01:18.864 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:18.864 ++ HOME_URL=https://fedoraproject.org/ 00:01:18.864 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:18.864 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:18.864 ++ 
BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:18.864 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:18.864 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:18.864 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:18.864 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:18.864 ++ SUPPORT_END=2024-05-14
00:01:18.864 ++ VARIANT='Cloud Edition'
00:01:18.865 ++ VARIANT_ID=cloud
00:01:18.865 + uname -a
00:01:18.865 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:18.865 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:19.802 Hugepages
00:01:19.802 node hugesize free / total
00:01:19.802 node0 1048576kB 0 / 0
00:01:19.802 node0 2048kB 0 / 0
00:01:19.802 node1 1048576kB 0 / 0
00:01:19.802 node1 2048kB 0 / 0
00:01:19.802
00:01:19.802 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:19.802 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:19.802 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:19.802 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:19.802 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:19.802 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:19.802 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:19.802 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:19.802 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:20.060 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:20.060 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:20.060 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:20.060 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:20.060 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:20.060 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:20.060 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:20.060 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:20.060 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:20.060 + rm -f /tmp/spdk-ld-path
00:01:20.060 + source autorun-spdk.conf
00:01:20.060 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.060 ++ SPDK_TEST_NVMF=1
00:01:20.060 ++
SPDK_TEST_NVME_CLI=1 00:01:20.060 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.060 ++ SPDK_TEST_NVMF_NICS=e810 00:01:20.060 ++ SPDK_TEST_VFIOUSER=1 00:01:20.060 ++ SPDK_RUN_UBSAN=1 00:01:20.060 ++ NET_TYPE=phy 00:01:20.060 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:20.060 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:20.060 ++ RUN_NIGHTLY=1 00:01:20.060 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:20.060 + [[ -n '' ]] 00:01:20.060 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:20.060 + for M in /var/spdk/build-*-manifest.txt 00:01:20.060 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:20.060 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:20.060 + for M in /var/spdk/build-*-manifest.txt 00:01:20.060 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:20.060 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:20.060 ++ uname 00:01:20.060 + [[ Linux == \L\i\n\u\x ]] 00:01:20.060 + sudo dmesg -T 00:01:20.060 + sudo dmesg --clear 00:01:20.060 + dmesg_pid=547903 00:01:20.060 + [[ Fedora Linux == FreeBSD ]] 00:01:20.060 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:20.060 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:20.060 + sudo dmesg -Tw 00:01:20.060 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:20.060 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:20.060 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:20.060 + [[ -x /usr/src/fio-static/fio ]] 00:01:20.060 + export FIO_BIN=/usr/src/fio-static/fio 00:01:20.060 + FIO_BIN=/usr/src/fio-static/fio 00:01:20.060 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:20.060 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:20.060 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:20.060 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:20.061 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:20.061 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:20.061 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:20.061 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:20.061 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:20.061 Test configuration: 00:01:20.061 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.061 SPDK_TEST_NVMF=1 00:01:20.061 SPDK_TEST_NVME_CLI=1 00:01:20.061 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.061 SPDK_TEST_NVMF_NICS=e810 00:01:20.061 SPDK_TEST_VFIOUSER=1 00:01:20.061 SPDK_RUN_UBSAN=1 00:01:20.061 NET_TYPE=phy 00:01:20.061 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:20.061 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:20.061 RUN_NIGHTLY=1 23:41:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:20.061 23:41:25 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:20.061 23:41:25 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:20.061 23:41:25 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:20.061 23:41:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.061 23:41:25 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.061 23:41:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.061 23:41:25 -- paths/export.sh@5 -- $ export PATH 00:01:20.061 23:41:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.061 23:41:25 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:20.061 23:41:25 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:20.061 23:41:25 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721857285.XXXXXX 00:01:20.061 23:41:25 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721857285.nEoEiL 00:01:20.061 23:41:25 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:20.061 23:41:25 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:01:20.061 23:41:25 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:20.061 23:41:25 -- 
common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:20.061 23:41:25 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:20.061 23:41:25 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:20.061 23:41:25 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:20.061 23:41:25 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:20.061 23:41:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.061 23:41:25 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:20.061 23:41:25 -- common/autobuild_common.sh@458 -- $ start_monitor_resources 00:01:20.061 23:41:25 -- pm/common@17 -- $ local monitor 00:01:20.061 23:41:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.061 23:41:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.061 23:41:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.061 23:41:25 -- pm/common@21 -- $ date +%s 00:01:20.061 23:41:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.061 23:41:25 -- pm/common@21 -- $ date +%s 00:01:20.061 23:41:25 -- pm/common@25 -- $ sleep 1 00:01:20.061 23:41:25 -- pm/common@21 -- $ date +%s 00:01:20.061 23:41:25 -- pm/common@21 -- $ date +%s 00:01:20.061 23:41:25 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721857285 00:01:20.061 23:41:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721857285 00:01:20.061 23:41:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721857285 00:01:20.061 23:41:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721857285 00:01:20.061 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721857285_collect-vmstat.pm.log 00:01:20.061 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721857285_collect-cpu-load.pm.log 00:01:20.061 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721857285_collect-cpu-temp.pm.log 00:01:20.061 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721857285_collect-bmc-pm.bmc.pm.log 00:01:21.438 23:41:26 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT 00:01:21.438 23:41:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:21.438 23:41:26 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:21.438 23:41:26 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:21.438 23:41:26 -- spdk/autobuild.sh@16 -- $ date -u 00:01:21.438 Wed Jul 24 09:41:26 PM UTC 2024 00:01:21.438 23:41:26 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:01:21.438 v24.05-15-g241d0f3c9 00:01:21.438 23:41:26 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:21.438 23:41:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:21.438 23:41:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:21.438 23:41:26 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:21.438 23:41:26 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:21.438 23:41:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.438 ************************************ 00:01:21.438 START TEST ubsan 00:01:21.438 ************************************ 00:01:21.438 23:41:26 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:21.438 using ubsan 00:01:21.438 00:01:21.438 real 0m0.000s 00:01:21.438 user 0m0.000s 00:01:21.438 sys 0m0.000s 00:01:21.438 23:41:26 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:21.438 23:41:26 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:21.438 ************************************ 00:01:21.438 END TEST ubsan 00:01:21.438 ************************************ 00:01:21.438 23:41:26 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:21.438 23:41:26 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:21.438 23:41:26 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:21.438 23:41:26 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:21.438 23:41:26 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:21.438 23:41:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.438 ************************************ 00:01:21.438 START TEST build_native_dpdk 00:01:21.438 ************************************ 00:01:21.438 23:41:26 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local 
external_dpdk_base_dir 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:21.438 23:41:26 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:21.439 caf0f5d395 version: 22.11.4 00:01:21.439 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:21.439 dc9c799c7d vhost: fix missing spinlock unlock 00:01:21.439 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:21.439 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 
]] 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:21.439 patching file config/rte_config.h 00:01:21.439 Hunk #1 succeeded at 60 (offset 1 line). 
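(Editor's note: the cmp_versions trace above splits each version on ".-:" into an array and compares fields left to right, returning 1 because 22 > 21. A minimal standalone sketch of that field-by-field comparison — a hypothetical rewrite, not the actual scripts/common.sh helper:)

```shell
# version_lt A B: succeed (return 0) if version A sorts strictly before B,
# comparing dot-separated numeric fields; missing fields count as 0.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1   # equal versions are not less-than
}

# The check traced above: 22.11.4 is NOT older than 21.11.0.
version_lt 22.11.4 21.11.0 && echo lt || echo ge   # prints "ge"
```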
00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:21.439 23:41:26 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:21.439 patching file lib/pcapng/rte_pcapng.c 00:01:21.439 Hunk #1 succeeded at 110 (offset -18 lines). 
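The xtrace above walks through `cmp_versions` from scripts/common.sh: the version strings are split on `.-:` into arrays, then compared component by component as integers (22 < 24, so 22.11.4 < 24.07.0 and the pcapng patch is applied). A minimal sketch of that per-component comparison, under the assumption that a standalone helper is wanted (the function name `lt_version` is hypothetical; the real logic lives in `cmp_versions`/`lt` in scripts/common.sh):

```shell
#!/usr/bin/env bash
# Sketch of the per-component numeric version comparison traced above.
# Returns 0 (true) when $1 is strictly less than $2.
lt_version() {
    local -a v1 v2
    local IFS='.-:'                 # same separators the trace splits on
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0     # strictly less at this component
        (( a > b )) && return 1     # strictly greater: not less-than
    done
    return 1                        # all components equal: not less-than
}

lt_version 22.11.4 24.07.0 && echo "22.11.4 < 24.07.0"
```

Comparing components numerically rather than lexicographically is what makes 1.2 sort before 1.10, which a plain string compare would get wrong.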
00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:21.439 23:41:26 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:25.628 The Meson build system 00:01:25.628 Version: 1.3.1 00:01:25.628 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:25.628 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:25.628 Build type: native build 00:01:25.628 Program cat found: YES (/usr/bin/cat) 00:01:25.628 Project name: DPDK 00:01:25.628 Project version: 22.11.4 00:01:25.628 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:25.628 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:25.628 Host machine cpu family: x86_64 00:01:25.628 Host machine cpu: x86_64 00:01:25.628 Message: ## Building in Developer Mode ## 00:01:25.628 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:25.628 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:25.628 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:25.628 Program objdump found: YES (/usr/bin/objdump) 00:01:25.628 Program python3 found: YES (/usr/bin/python3) 00:01:25.628 
Program cat found: YES (/usr/bin/cat) 00:01:25.628 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:01:25.628 Checking for size of "void *" : 8 00:01:25.628 Checking for size of "void *" : 8 (cached) 00:01:25.628 Library m found: YES 00:01:25.628 Library numa found: YES 00:01:25.628 Has header "numaif.h" : YES 00:01:25.628 Library fdt found: NO 00:01:25.628 Library execinfo found: NO 00:01:25.628 Has header "execinfo.h" : YES 00:01:25.628 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:25.628 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:25.628 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:25.628 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:25.628 Run-time dependency openssl found: YES 3.0.9 00:01:25.628 Run-time dependency libpcap found: YES 1.10.4 00:01:25.628 Has header "pcap.h" with dependency libpcap: YES 00:01:25.628 Compiler for C supports arguments -Wcast-qual: YES 00:01:25.628 Compiler for C supports arguments -Wdeprecated: YES 00:01:25.628 Compiler for C supports arguments -Wformat: YES 00:01:25.628 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:25.628 Compiler for C supports arguments -Wformat-security: NO 00:01:25.628 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:25.628 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:25.628 Compiler for C supports arguments -Wnested-externs: YES 00:01:25.628 Compiler for C supports arguments -Wold-style-definition: YES 00:01:25.628 Compiler for C supports arguments -Wpointer-arith: YES 00:01:25.628 Compiler for C supports arguments -Wsign-compare: YES 00:01:25.628 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:25.628 Compiler for C supports arguments -Wundef: YES 00:01:25.628 Compiler for C supports arguments -Wwrite-strings: YES 00:01:25.628 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:25.628 
Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:25.628 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:25.628 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:25.628 Compiler for C supports arguments -mavx512f: YES 00:01:25.628 Checking if "AVX512 checking" compiles: YES 00:01:25.628 Fetching value of define "__SSE4_2__" : 1 00:01:25.628 Fetching value of define "__AES__" : 1 00:01:25.628 Fetching value of define "__AVX__" : 1 00:01:25.628 Fetching value of define "__AVX2__" : (undefined) 00:01:25.628 Fetching value of define "__AVX512BW__" : (undefined) 00:01:25.628 Fetching value of define "__AVX512CD__" : (undefined) 00:01:25.628 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:25.628 Fetching value of define "__AVX512F__" : (undefined) 00:01:25.628 Fetching value of define "__AVX512VL__" : (undefined) 00:01:25.628 Fetching value of define "__PCLMUL__" : 1 00:01:25.628 Fetching value of define "__RDRND__" : 1 00:01:25.628 Fetching value of define "__RDSEED__" : (undefined) 00:01:25.628 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:25.628 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:25.628 Message: lib/kvargs: Defining dependency "kvargs" 00:01:25.628 Message: lib/telemetry: Defining dependency "telemetry" 00:01:25.628 Checking for function "getentropy" : YES 00:01:25.628 Message: lib/eal: Defining dependency "eal" 00:01:25.628 Message: lib/ring: Defining dependency "ring" 00:01:25.628 Message: lib/rcu: Defining dependency "rcu" 00:01:25.628 Message: lib/mempool: Defining dependency "mempool" 00:01:25.628 Message: lib/mbuf: Defining dependency "mbuf" 00:01:25.628 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:25.628 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:25.628 Compiler for C supports arguments -mpclmul: YES 00:01:25.628 Compiler for C supports arguments -maes: YES 00:01:25.628 Compiler for C supports 
arguments -mavx512f: YES (cached) 00:01:25.628 Compiler for C supports arguments -mavx512bw: YES 00:01:25.628 Compiler for C supports arguments -mavx512dq: YES 00:01:25.628 Compiler for C supports arguments -mavx512vl: YES 00:01:25.628 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:25.628 Compiler for C supports arguments -mavx2: YES 00:01:25.628 Compiler for C supports arguments -mavx: YES 00:01:25.628 Message: lib/net: Defining dependency "net" 00:01:25.628 Message: lib/meter: Defining dependency "meter" 00:01:25.628 Message: lib/ethdev: Defining dependency "ethdev" 00:01:25.628 Message: lib/pci: Defining dependency "pci" 00:01:25.628 Message: lib/cmdline: Defining dependency "cmdline" 00:01:25.628 Message: lib/metrics: Defining dependency "metrics" 00:01:25.628 Message: lib/hash: Defining dependency "hash" 00:01:25.628 Message: lib/timer: Defining dependency "timer" 00:01:25.628 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:25.628 Compiler for C supports arguments -mavx2: YES (cached) 00:01:25.628 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:25.628 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:25.628 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:25.628 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:25.628 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:25.628 Message: lib/acl: Defining dependency "acl" 00:01:25.628 Message: lib/bbdev: Defining dependency "bbdev" 00:01:25.628 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:25.628 Run-time dependency libelf found: YES 0.190 00:01:25.628 Message: lib/bpf: Defining dependency "bpf" 00:01:25.628 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:25.628 Message: lib/compressdev: Defining dependency "compressdev" 00:01:25.628 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:25.628 Message: lib/distributor: Defining 
dependency "distributor" 00:01:25.628 Message: lib/efd: Defining dependency "efd" 00:01:25.628 Message: lib/eventdev: Defining dependency "eventdev" 00:01:25.628 Message: lib/gpudev: Defining dependency "gpudev" 00:01:25.628 Message: lib/gro: Defining dependency "gro" 00:01:25.628 Message: lib/gso: Defining dependency "gso" 00:01:25.628 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:25.628 Message: lib/jobstats: Defining dependency "jobstats" 00:01:25.628 Message: lib/latencystats: Defining dependency "latencystats" 00:01:25.628 Message: lib/lpm: Defining dependency "lpm" 00:01:25.628 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:25.628 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:25.628 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:25.628 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:25.628 Message: lib/member: Defining dependency "member" 00:01:25.628 Message: lib/pcapng: Defining dependency "pcapng" 00:01:25.628 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:25.628 Message: lib/power: Defining dependency "power" 00:01:25.628 Message: lib/rawdev: Defining dependency "rawdev" 00:01:25.628 Message: lib/regexdev: Defining dependency "regexdev" 00:01:25.628 Message: lib/dmadev: Defining dependency "dmadev" 00:01:25.628 Message: lib/rib: Defining dependency "rib" 00:01:25.628 Message: lib/reorder: Defining dependency "reorder" 00:01:25.628 Message: lib/sched: Defining dependency "sched" 00:01:25.628 Message: lib/security: Defining dependency "security" 00:01:25.628 Message: lib/stack: Defining dependency "stack" 00:01:25.628 Has header "linux/userfaultfd.h" : YES 00:01:25.628 Message: lib/vhost: Defining dependency "vhost" 00:01:25.628 Message: lib/ipsec: Defining dependency "ipsec" 00:01:25.628 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:25.628 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:25.628 
Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:25.628 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:25.628 Message: lib/fib: Defining dependency "fib" 00:01:25.628 Message: lib/port: Defining dependency "port" 00:01:25.628 Message: lib/pdump: Defining dependency "pdump" 00:01:25.628 Message: lib/table: Defining dependency "table" 00:01:25.628 Message: lib/pipeline: Defining dependency "pipeline" 00:01:25.628 Message: lib/graph: Defining dependency "graph" 00:01:25.628 Message: lib/node: Defining dependency "node" 00:01:25.628 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:25.628 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:25.628 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:25.628 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:25.628 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:25.628 Compiler for C supports arguments -Wno-unused-value: YES 00:01:26.564 Compiler for C supports arguments -Wno-format: YES 00:01:26.564 Compiler for C supports arguments -Wno-format-security: YES 00:01:26.564 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:26.564 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:26.564 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:26.564 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:26.564 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:26.564 Compiler for C supports arguments -mavx2: YES (cached) 00:01:26.564 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:26.564 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:26.564 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:26.564 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:26.564 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:26.564 Program doxygen found: YES 
(/usr/bin/doxygen) 00:01:26.564 Configuring doxy-api.conf using configuration 00:01:26.564 Program sphinx-build found: NO 00:01:26.564 Configuring rte_build_config.h using configuration 00:01:26.564 Message: 00:01:26.564 ================= 00:01:26.564 Applications Enabled 00:01:26.564 ================= 00:01:26.564 00:01:26.564 apps: 00:01:26.564 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:26.564 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:26.564 test-security-perf, 00:01:26.564 00:01:26.564 Message: 00:01:26.564 ================= 00:01:26.564 Libraries Enabled 00:01:26.564 ================= 00:01:26.564 00:01:26.564 libs: 00:01:26.564 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:26.564 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:26.564 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:26.564 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:26.564 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:26.564 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:26.564 table, pipeline, graph, node, 00:01:26.564 00:01:26.564 Message: 00:01:26.564 =============== 00:01:26.564 Drivers Enabled 00:01:26.564 =============== 00:01:26.564 00:01:26.564 common: 00:01:26.564 00:01:26.564 bus: 00:01:26.564 pci, vdev, 00:01:26.564 mempool: 00:01:26.564 ring, 00:01:26.564 dma: 00:01:26.564 00:01:26.564 net: 00:01:26.564 i40e, 00:01:26.564 raw: 00:01:26.564 00:01:26.564 crypto: 00:01:26.564 00:01:26.564 compress: 00:01:26.564 00:01:26.564 regex: 00:01:26.564 00:01:26.564 vdpa: 00:01:26.564 00:01:26.564 event: 00:01:26.564 00:01:26.564 baseband: 00:01:26.564 00:01:26.564 gpu: 00:01:26.564 00:01:26.564 00:01:26.564 Message: 00:01:26.564 ================= 00:01:26.564 Content Skipped 00:01:26.564 ================= 00:01:26.564 00:01:26.564 apps: 
00:01:26.564 00:01:26.564 libs: 00:01:26.564 kni: explicitly disabled via build config (deprecated lib) 00:01:26.564 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:26.564 00:01:26.564 drivers: 00:01:26.564 common/cpt: not in enabled drivers build config 00:01:26.564 common/dpaax: not in enabled drivers build config 00:01:26.564 common/iavf: not in enabled drivers build config 00:01:26.564 common/idpf: not in enabled drivers build config 00:01:26.564 common/mvep: not in enabled drivers build config 00:01:26.564 common/octeontx: not in enabled drivers build config 00:01:26.564 bus/auxiliary: not in enabled drivers build config 00:01:26.564 bus/dpaa: not in enabled drivers build config 00:01:26.564 bus/fslmc: not in enabled drivers build config 00:01:26.564 bus/ifpga: not in enabled drivers build config 00:01:26.564 bus/vmbus: not in enabled drivers build config 00:01:26.564 common/cnxk: not in enabled drivers build config 00:01:26.564 common/mlx5: not in enabled drivers build config 00:01:26.564 common/qat: not in enabled drivers build config 00:01:26.564 common/sfc_efx: not in enabled drivers build config 00:01:26.564 mempool/bucket: not in enabled drivers build config 00:01:26.564 mempool/cnxk: not in enabled drivers build config 00:01:26.564 mempool/dpaa: not in enabled drivers build config 00:01:26.564 mempool/dpaa2: not in enabled drivers build config 00:01:26.564 mempool/octeontx: not in enabled drivers build config 00:01:26.564 mempool/stack: not in enabled drivers build config 00:01:26.564 dma/cnxk: not in enabled drivers build config 00:01:26.564 dma/dpaa: not in enabled drivers build config 00:01:26.564 dma/dpaa2: not in enabled drivers build config 00:01:26.564 dma/hisilicon: not in enabled drivers build config 00:01:26.564 dma/idxd: not in enabled drivers build config 00:01:26.564 dma/ioat: not in enabled drivers build config 00:01:26.564 dma/skeleton: not in enabled drivers build config 00:01:26.564 net/af_packet: not in 
enabled drivers build config 00:01:26.564 net/af_xdp: not in enabled drivers build config 00:01:26.564 net/ark: not in enabled drivers build config 00:01:26.564 net/atlantic: not in enabled drivers build config 00:01:26.564 net/avp: not in enabled drivers build config 00:01:26.564 net/axgbe: not in enabled drivers build config 00:01:26.564 net/bnx2x: not in enabled drivers build config 00:01:26.564 net/bnxt: not in enabled drivers build config 00:01:26.564 net/bonding: not in enabled drivers build config 00:01:26.564 net/cnxk: not in enabled drivers build config 00:01:26.564 net/cxgbe: not in enabled drivers build config 00:01:26.564 net/dpaa: not in enabled drivers build config 00:01:26.564 net/dpaa2: not in enabled drivers build config 00:01:26.564 net/e1000: not in enabled drivers build config 00:01:26.564 net/ena: not in enabled drivers build config 00:01:26.564 net/enetc: not in enabled drivers build config 00:01:26.564 net/enetfec: not in enabled drivers build config 00:01:26.564 net/enic: not in enabled drivers build config 00:01:26.564 net/failsafe: not in enabled drivers build config 00:01:26.564 net/fm10k: not in enabled drivers build config 00:01:26.564 net/gve: not in enabled drivers build config 00:01:26.564 net/hinic: not in enabled drivers build config 00:01:26.564 net/hns3: not in enabled drivers build config 00:01:26.564 net/iavf: not in enabled drivers build config 00:01:26.564 net/ice: not in enabled drivers build config 00:01:26.564 net/idpf: not in enabled drivers build config 00:01:26.564 net/igc: not in enabled drivers build config 00:01:26.564 net/ionic: not in enabled drivers build config 00:01:26.565 net/ipn3ke: not in enabled drivers build config 00:01:26.565 net/ixgbe: not in enabled drivers build config 00:01:26.565 net/kni: not in enabled drivers build config 00:01:26.565 net/liquidio: not in enabled drivers build config 00:01:26.565 net/mana: not in enabled drivers build config 00:01:26.565 net/memif: not in enabled drivers build 
config 00:01:26.565 net/mlx4: not in enabled drivers build config 00:01:26.565 net/mlx5: not in enabled drivers build config 00:01:26.565 net/mvneta: not in enabled drivers build config 00:01:26.565 net/mvpp2: not in enabled drivers build config 00:01:26.565 net/netvsc: not in enabled drivers build config 00:01:26.565 net/nfb: not in enabled drivers build config 00:01:26.565 net/nfp: not in enabled drivers build config 00:01:26.565 net/ngbe: not in enabled drivers build config 00:01:26.565 net/null: not in enabled drivers build config 00:01:26.565 net/octeontx: not in enabled drivers build config 00:01:26.565 net/octeon_ep: not in enabled drivers build config 00:01:26.565 net/pcap: not in enabled drivers build config 00:01:26.565 net/pfe: not in enabled drivers build config 00:01:26.565 net/qede: not in enabled drivers build config 00:01:26.565 net/ring: not in enabled drivers build config 00:01:26.565 net/sfc: not in enabled drivers build config 00:01:26.565 net/softnic: not in enabled drivers build config 00:01:26.565 net/tap: not in enabled drivers build config 00:01:26.565 net/thunderx: not in enabled drivers build config 00:01:26.565 net/txgbe: not in enabled drivers build config 00:01:26.565 net/vdev_netvsc: not in enabled drivers build config 00:01:26.565 net/vhost: not in enabled drivers build config 00:01:26.565 net/virtio: not in enabled drivers build config 00:01:26.565 net/vmxnet3: not in enabled drivers build config 00:01:26.565 raw/cnxk_bphy: not in enabled drivers build config 00:01:26.565 raw/cnxk_gpio: not in enabled drivers build config 00:01:26.565 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:26.565 raw/ifpga: not in enabled drivers build config 00:01:26.565 raw/ntb: not in enabled drivers build config 00:01:26.565 raw/skeleton: not in enabled drivers build config 00:01:26.565 crypto/armv8: not in enabled drivers build config 00:01:26.565 crypto/bcmfs: not in enabled drivers build config 00:01:26.565 crypto/caam_jr: not in enabled 
drivers build config 00:01:26.565 crypto/ccp: not in enabled drivers build config 00:01:26.565 crypto/cnxk: not in enabled drivers build config 00:01:26.565 crypto/dpaa_sec: not in enabled drivers build config 00:01:26.565 crypto/dpaa2_sec: not in enabled drivers build config 00:01:26.565 crypto/ipsec_mb: not in enabled drivers build config 00:01:26.565 crypto/mlx5: not in enabled drivers build config 00:01:26.565 crypto/mvsam: not in enabled drivers build config 00:01:26.565 crypto/nitrox: not in enabled drivers build config 00:01:26.565 crypto/null: not in enabled drivers build config 00:01:26.565 crypto/octeontx: not in enabled drivers build config 00:01:26.565 crypto/openssl: not in enabled drivers build config 00:01:26.565 crypto/scheduler: not in enabled drivers build config 00:01:26.565 crypto/uadk: not in enabled drivers build config 00:01:26.565 crypto/virtio: not in enabled drivers build config 00:01:26.565 compress/isal: not in enabled drivers build config 00:01:26.565 compress/mlx5: not in enabled drivers build config 00:01:26.565 compress/octeontx: not in enabled drivers build config 00:01:26.565 compress/zlib: not in enabled drivers build config 00:01:26.565 regex/mlx5: not in enabled drivers build config 00:01:26.565 regex/cn9k: not in enabled drivers build config 00:01:26.565 vdpa/ifc: not in enabled drivers build config 00:01:26.565 vdpa/mlx5: not in enabled drivers build config 00:01:26.565 vdpa/sfc: not in enabled drivers build config 00:01:26.565 event/cnxk: not in enabled drivers build config 00:01:26.565 event/dlb2: not in enabled drivers build config 00:01:26.565 event/dpaa: not in enabled drivers build config 00:01:26.565 event/dpaa2: not in enabled drivers build config 00:01:26.565 event/dsw: not in enabled drivers build config 00:01:26.565 event/opdl: not in enabled drivers build config 00:01:26.565 event/skeleton: not in enabled drivers build config 00:01:26.565 event/sw: not in enabled drivers build config 00:01:26.565 event/octeontx: 
not in enabled drivers build config 00:01:26.565 baseband/acc: not in enabled drivers build config 00:01:26.565 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:26.565 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:26.565 baseband/la12xx: not in enabled drivers build config 00:01:26.565 baseband/null: not in enabled drivers build config 00:01:26.565 baseband/turbo_sw: not in enabled drivers build config 00:01:26.565 gpu/cuda: not in enabled drivers build config 00:01:26.565 00:01:26.565 00:01:26.565 Build targets in project: 316 00:01:26.565 00:01:26.565 DPDK 22.11.4 00:01:26.565 00:01:26.565 User defined options 00:01:26.565 libdir : lib 00:01:26.565 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:26.565 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:26.565 c_link_args : 00:01:26.565 enable_docs : false 00:01:26.565 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:26.565 enable_kmods : false 00:01:26.565 machine : native 00:01:26.565 tests : false 00:01:26.565 00:01:26.565 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:26.565 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
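Meson itself flags two things in the configure output above: the `machine` option is deprecated in favor of `cpu_instruction_set`, and invoking the setup step as bare `meson [options]` is deprecated in favor of `meson setup [options]`. A sketch of the equivalent invocation with both warnings addressed, under the assumption that the option set from the log is otherwise unchanged (paths shortened here for readability; the real command in autobuild_common.sh uses the full workspace paths):

```shell
# Hypothetical warning-free form of the configure step from the log:
# explicit "setup" subcommand, and cpu_instruction_set instead of machine.
meson setup build-tmp \
    --prefix="$PWD/dpdk/build" \
    --libdir lib \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dtests=false \
    -Dc_link_args= \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
```

Whether `cpu_instruction_set` is accepted depends on the DPDK/Meson versions in play; on this builder (Meson 1.3.1, DPDK 22.11.4) the deprecated spellings still work, which is why the build proceeds with only warnings.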
00:01:26.565 23:41:32 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:26.565 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:26.828 [1/745] Generating lib/rte_kvargs_def with a custom command 00:01:26.828 [2/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:26.828 [3/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:26.828 [4/745] Generating lib/rte_telemetry_def with a custom command 00:01:26.828 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:26.828 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:26.828 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:26.828 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:26.828 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:26.828 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:26.828 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:26.828 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:26.828 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:26.828 [14/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:26.828 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:26.828 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:26.828 [17/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:26.828 [18/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:26.828 [19/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:26.828 [20/745] Linking static target lib/librte_kvargs.a 00:01:26.828 
[21/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:26.828 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:26.828 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:27.090 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:27.090 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:27.090 [26/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:27.090 [27/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:27.090 [28/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:27.090 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:27.090 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:27.090 [31/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:27.090 [32/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:27.090 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:27.090 [34/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:27.090 [35/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:27.090 [36/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:27.090 [37/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:27.090 [38/745] Generating lib/rte_eal_def with a custom command 00:01:27.090 [39/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:27.090 [40/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:27.090 [41/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:27.090 [42/745] Generating lib/rte_eal_mingw with a custom command 00:01:27.090 [43/745] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:27.090 [44/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:27.090 [45/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:27.090 [46/745] Generating lib/rte_ring_def with a custom command 00:01:27.090 [47/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:27.090 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:27.090 [49/745] Generating lib/rte_ring_mingw with a custom command 00:01:27.090 [50/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:27.090 [51/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:27.090 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:27.090 [53/745] Generating lib/rte_rcu_def with a custom command 00:01:27.090 [54/745] Generating lib/rte_rcu_mingw with a custom command 00:01:27.090 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:27.090 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:27.090 [57/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:27.091 [58/745] Generating lib/rte_mempool_mingw with a custom command 00:01:27.091 [59/745] Generating lib/rte_mempool_def with a custom command 00:01:27.091 [60/745] Generating lib/rte_mbuf_def with a custom command 00:01:27.091 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:27.091 [62/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:27.091 [63/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:27.091 [64/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:27.091 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:27.091 [66/745] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:27.091 [67/745] Generating lib/rte_net_def with a custom command 00:01:27.091 [68/745] Generating lib/rte_net_mingw with a custom command 00:01:27.091 [69/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:27.091 [70/745] Generating lib/rte_meter_def with a custom command 00:01:27.091 [71/745] Generating lib/rte_meter_mingw with a custom command 00:01:27.091 [72/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:27.091 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:27.354 [74/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:27.354 [75/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:27.354 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:27.354 [77/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:27.354 [78/745] Generating lib/rte_ethdev_def with a custom command 00:01:27.354 [79/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.354 [80/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:27.354 [81/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:27.354 [82/745] Linking target lib/librte_kvargs.so.23.0 00:01:27.354 [83/745] Linking static target lib/librte_ring.a 00:01:27.354 [84/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:27.354 [85/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:27.354 [86/745] Generating lib/rte_pci_def with a custom command 00:01:27.354 [87/745] Linking static target lib/librte_meter.a 00:01:27.354 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:27.354 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:27.614 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:27.614 [91/745] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:27.614 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:27.614 [93/745] Linking static target lib/librte_pci.a 00:01:27.614 [94/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:27.614 [95/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:27.614 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:27.614 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:27.614 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:27.870 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.870 [100/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.870 [101/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:27.870 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:27.870 [103/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:27.870 [104/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:27.870 [105/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.870 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:27.870 [107/745] Generating lib/rte_cmdline_def with a custom command 00:01:27.870 [108/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:27.870 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:27.870 [110/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:27.870 [111/745] Linking static target lib/librte_telemetry.a 00:01:27.870 [112/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:27.870 [113/745] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:27.870 [114/745] Generating lib/rte_metrics_def with a custom command 00:01:27.870 [115/745] Generating lib/rte_metrics_mingw with a custom command 00:01:27.870 [116/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:27.870 [117/745] Generating lib/rte_hash_def with a custom command 00:01:27.870 [118/745] Generating lib/rte_hash_mingw with a custom command 00:01:27.870 [119/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:27.870 [120/745] Generating lib/rte_timer_def with a custom command 00:01:28.130 [121/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:28.130 [122/745] Generating lib/rte_timer_mingw with a custom command 00:01:28.130 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:28.130 [124/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:28.130 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:28.388 [126/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:28.388 [127/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:28.388 [128/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:28.388 [129/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:28.388 [130/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:28.388 [131/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:28.388 [132/745] Generating lib/rte_acl_def with a custom command 00:01:28.388 [133/745] Generating lib/rte_acl_mingw with a custom command 00:01:28.388 [134/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:28.388 [135/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:28.388 [136/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:28.388 [137/745] Compiling C 
object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:28.388 [138/745] Generating lib/rte_bbdev_def with a custom command 00:01:28.388 [139/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:28.388 [140/745] Generating lib/rte_bitratestats_def with a custom command 00:01:28.388 [141/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:28.388 [142/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:28.388 [143/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:28.388 [144/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.653 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:28.653 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:28.653 [147/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:28.653 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:28.653 [149/745] Linking target lib/librte_telemetry.so.23.0 00:01:28.653 [150/745] Generating lib/rte_bpf_def with a custom command 00:01:28.653 [151/745] Generating lib/rte_bpf_mingw with a custom command 00:01:28.653 [152/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:28.653 [153/745] Generating lib/rte_cfgfile_def with a custom command 00:01:28.653 [154/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:28.653 [155/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:28.653 [156/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:28.653 [157/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:28.653 [158/745] Generating lib/rte_compressdev_def with a custom command 00:01:28.653 [159/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:28.653 [160/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 
00:01:28.653 [161/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:28.913 [162/745] Generating lib/rte_cryptodev_def with a custom command 00:01:28.913 [163/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:28.913 [164/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:28.913 [165/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:28.913 [166/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:28.913 [167/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:28.913 [168/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:28.913 [169/745] Linking static target lib/librte_rcu.a 00:01:28.913 [170/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:28.913 [171/745] Linking static target lib/librte_timer.a 00:01:28.913 [172/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:28.913 [173/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:28.913 [174/745] Linking static target lib/librte_cmdline.a 00:01:28.913 [175/745] Generating lib/rte_distributor_mingw with a custom command 00:01:28.913 [176/745] Generating lib/rte_distributor_def with a custom command 00:01:28.913 [177/745] Linking static target lib/librte_net.a 00:01:28.913 [178/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:28.913 [179/745] Generating lib/rte_efd_mingw with a custom command 00:01:28.913 [180/745] Generating lib/rte_efd_def with a custom command 00:01:28.913 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:29.173 [182/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:29.173 [183/745] Linking static target lib/librte_mempool.a 00:01:29.173 [184/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:29.173 [185/745] Linking static target lib/librte_cfgfile.a 
00:01:29.173 [186/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:29.173 [187/745] Linking static target lib/librte_metrics.a 00:01:29.173 [188/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.436 [189/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.436 [190/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:29.436 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.436 [192/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:29.436 [193/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:29.436 [194/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:29.436 [195/745] Generating lib/rte_eventdev_def with a custom command 00:01:29.436 [196/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:29.436 [197/745] Linking static target lib/librte_eal.a 00:01:29.698 [198/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:29.698 [199/745] Generating lib/rte_gpudev_def with a custom command 00:01:29.698 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:29.698 [201/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:29.698 [202/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:29.698 [203/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:29.698 [204/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:29.698 [205/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.698 [206/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:29.698 [207/745] Linking static target lib/librte_bitratestats.a 00:01:29.698 [208/745] Generating lib/rte_gro_def with a custom command 00:01:29.698 [209/745] Generating lib/rte_gro_mingw with a 
custom command 00:01:29.698 [210/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.698 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:29.962 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:29.962 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:29.962 [214/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:29.962 [215/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.962 [216/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:29.962 [217/745] Generating lib/rte_gso_def with a custom command 00:01:29.962 [218/745] Generating lib/rte_gso_mingw with a custom command 00:01:29.962 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:30.223 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:30.223 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:30.223 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:30.223 [223/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:30.223 [224/745] Linking static target lib/librte_bbdev.a 00:01:30.223 [225/745] Generating lib/rte_ip_frag_def with a custom command 00:01:30.223 [226/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.223 [227/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:30.223 [228/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.223 [229/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:30.223 [230/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:30.490 [231/745] Generating 
lib/rte_jobstats_mingw with a custom command 00:01:30.490 [232/745] Generating lib/rte_jobstats_def with a custom command 00:01:30.490 [233/745] Generating lib/rte_latencystats_def with a custom command 00:01:30.490 [234/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:30.490 [235/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:30.490 [236/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:30.490 [237/745] Linking static target lib/librte_compressdev.a 00:01:30.490 [238/745] Generating lib/rte_lpm_mingw with a custom command 00:01:30.490 [239/745] Generating lib/rte_lpm_def with a custom command 00:01:30.490 [240/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:30.490 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:30.490 [242/745] Linking static target lib/librte_jobstats.a 00:01:30.758 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:30.758 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:30.758 [245/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:31.018 [246/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:31.018 [247/745] Linking static target lib/librte_distributor.a 00:01:31.018 [248/745] Generating lib/rte_member_def with a custom command 00:01:31.018 [249/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:31.018 [250/745] Generating lib/rte_member_mingw with a custom command 00:01:31.018 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:31.018 [252/745] Generating lib/rte_pcapng_def with a custom command 00:01:31.018 [253/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.018 [254/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:31.018 [255/745] Compiling C object 
lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:31.281 [256/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:31.281 [257/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:31.281 [258/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:31.281 [259/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.281 [260/745] Linking static target lib/librte_bpf.a 00:01:31.281 [261/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:31.281 [262/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:31.281 [263/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:31.281 [264/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:31.281 [265/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:31.281 [266/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:31.281 [267/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:31.281 [268/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.281 [269/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:31.281 [270/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:31.281 [271/745] Generating lib/rte_power_def with a custom command 00:01:31.281 [272/745] Linking static target lib/librte_gpudev.a 00:01:31.281 [273/745] Linking static target lib/librte_gro.a 00:01:31.281 [274/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:31.281 [275/745] Generating lib/rte_power_mingw with a custom command 00:01:31.281 [276/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:31.281 [277/745] Generating lib/rte_rawdev_def with a custom command 00:01:31.546 [278/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:31.546 [279/745] Generating lib/rte_regexdev_def with a custom 
command 00:01:31.546 [280/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:31.546 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:31.546 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:31.546 [283/745] Generating lib/rte_rib_def with a custom command 00:01:31.546 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:31.546 [285/745] Generating lib/rte_reorder_def with a custom command 00:01:31.546 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:31.546 [287/745] Generating lib/rte_reorder_mingw with a custom command 00:01:31.546 [288/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.810 [289/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:31.810 [290/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.810 [291/745] Generating lib/rte_sched_def with a custom command 00:01:31.810 [292/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:31.810 [293/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.810 [294/745] Generating lib/rte_sched_mingw with a custom command 00:01:31.810 [295/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:31.810 [296/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:31.810 [297/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:31.810 [298/745] Generating lib/rte_security_def with a custom command 00:01:31.810 [299/745] Generating lib/rte_security_mingw with a custom command 00:01:31.810 [300/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:31.810 [301/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:31.810 [302/745] Compiling C object 
lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:31.810 [303/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:31.810 [304/745] Linking static target lib/librte_latencystats.a 00:01:31.810 [305/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:31.810 [306/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:31.810 [307/745] Generating lib/rte_stack_mingw with a custom command 00:01:31.811 [308/745] Generating lib/rte_stack_def with a custom command 00:01:32.071 [309/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:32.071 [310/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:32.071 [311/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:32.071 [312/745] Linking static target lib/librte_rawdev.a 00:01:32.071 [313/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:32.071 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:32.071 [315/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:32.071 [316/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:32.071 [317/745] Linking static target lib/librte_stack.a 00:01:32.071 [318/745] Generating lib/rte_vhost_def with a custom command 00:01:32.071 [319/745] Generating lib/rte_vhost_mingw with a custom command 00:01:32.071 [320/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:32.071 [321/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:32.071 [322/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:32.071 [323/745] Linking static target lib/librte_dmadev.a 00:01:32.334 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:32.334 [325/745] Linking static target lib/librte_ip_frag.a 00:01:32.334 [326/745] Generating lib/latencystats.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:32.334 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:32.334 [328/745] Generating lib/rte_ipsec_def with a custom command 00:01:32.334 [329/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:32.334 [330/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.334 [331/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:32.600 [332/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:32.600 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:32.600 [334/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.600 [335/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.600 [336/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.862 [337/745] Generating lib/rte_fib_def with a custom command 00:01:32.862 [338/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:32.862 [339/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:32.862 [340/745] Generating lib/rte_fib_mingw with a custom command 00:01:32.862 [341/745] Linking static target lib/librte_gso.a 00:01:32.862 [342/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:32.862 [343/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:32.862 [344/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:32.862 [345/745] Linking static target lib/librte_regexdev.a 00:01:33.126 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.126 [347/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.126 [348/745] Compiling C object 
lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:33.126 [349/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:33.126 [350/745] Linking static target lib/librte_efd.a 00:01:33.389 [351/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:33.389 [352/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:33.389 [353/745] Linking static target lib/librte_lpm.a 00:01:33.389 [354/745] Linking static target lib/librte_pcapng.a 00:01:33.389 [355/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:33.389 [356/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:33.389 [357/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:33.389 [358/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:33.389 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:33.389 [360/745] Linking static target lib/librte_reorder.a 00:01:33.389 [361/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:33.655 [362/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.655 [363/745] Generating lib/rte_port_def with a custom command 00:01:33.655 [364/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:33.655 [365/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:33.655 [366/745] Generating lib/rte_port_mingw with a custom command 00:01:33.655 [367/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:33.655 [368/745] Generating lib/rte_pdump_def with a custom command 00:01:33.655 [369/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:33.655 [370/745] Linking static target lib/acl/libavx2_tmp.a 00:01:33.655 [371/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:33.655 [372/745] Generating lib/rte_pdump_mingw with a custom command 00:01:33.655 [373/745] Compiling C object 
lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:33.655 [374/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:33.655 [375/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:33.915 [376/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.915 [377/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.915 [378/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:33.915 [379/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:33.915 [380/745] Linking static target lib/librte_security.a 00:01:33.915 [381/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:33.915 [382/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.915 [383/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:33.915 [384/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:33.915 [385/745] Linking static target lib/librte_hash.a 00:01:33.915 [386/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:33.915 [387/745] Linking static target lib/librte_power.a 00:01:33.915 [388/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.181 [389/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:34.181 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:34.181 [391/745] Linking static target lib/librte_rib.a 00:01:34.181 [392/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:34.442 [393/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:34.442 [394/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:34.442 [395/745] Linking static target lib/acl/libavx512_tmp.a 00:01:34.442 [396/745] Linking static target lib/librte_acl.a 
00:01:34.442 [397/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:34.442 [398/745] Generating lib/rte_table_def with a custom command 00:01:34.442 [399/745] Generating lib/rte_table_mingw with a custom command 00:01:34.442 [400/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.703 [401/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:34.995 [402/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:34.995 [403/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.995 [404/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:34.995 [405/745] Linking static target lib/librte_ethdev.a 00:01:34.995 [406/745] Linking static target lib/librte_mbuf.a 00:01:34.995 [407/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.995 [408/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:34.995 [409/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:34.995 [410/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:34.995 [411/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:34.995 [412/745] Generating lib/rte_pipeline_def with a custom command 00:01:34.995 [413/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:34.995 [414/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:34.995 [415/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:34.995 [416/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.995 [417/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:35.272 [418/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:35.272 [419/745] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:35.272 [420/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:35.272 [421/745] Generating lib/rte_graph_mingw with a custom command 00:01:35.272 [422/745] Generating lib/rte_graph_def with a custom command 00:01:35.272 [423/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:35.272 [424/745] Linking static target lib/librte_fib.a 00:01:35.272 [425/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:35.272 [426/745] Linking static target lib/librte_member.a 00:01:35.536 [427/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:35.536 [428/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:35.536 [429/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:35.536 [430/745] Linking static target lib/librte_eventdev.a 00:01:35.536 [431/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:35.536 [432/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.536 [433/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:35.536 [434/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:35.536 [435/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:35.536 [436/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:35.536 [437/745] Generating lib/rte_node_def with a custom command 00:01:35.536 [438/745] Generating lib/rte_node_mingw with a custom command 00:01:35.801 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:35.801 [440/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:35.801 [441/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.801 [442/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 
00:01:35.801 [443/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:35.801 [444/745] Linking static target lib/librte_sched.a 00:01:35.801 [445/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.801 [446/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:35.801 [447/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:35.801 [448/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:35.801 [449/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:36.060 [450/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.060 [451/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:36.060 [452/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:36.060 [453/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:36.060 [454/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:36.060 [455/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:36.060 [456/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:36.060 [457/745] Linking static target lib/librte_cryptodev.a 00:01:36.060 [458/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:36.060 [459/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:36.060 [460/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:36.060 [461/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:36.329 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:36.329 [463/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:36.329 [464/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:36.329 [465/745] 
Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:36.329 [466/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:36.329 [467/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:36.329 [468/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:36.329 [469/745] Linking static target lib/librte_pdump.a 00:01:36.329 [470/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:36.329 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:36.329 [472/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:36.329 [473/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:36.589 [474/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:36.589 [475/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.589 [476/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:36.589 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:36.589 [478/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:36.589 [479/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:36.589 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:36.589 [481/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:36.855 [482/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:36.855 [483/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:36.855 [484/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:36.855 [485/745] Linking static target drivers/librte_bus_vdev.a 00:01:36.855 [486/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:36.856 [487/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:36.856 [488/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:36.856 [489/745] Linking static target lib/librte_table.a 00:01:37.119 [490/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:37.119 [491/745] Linking static target lib/librte_ipsec.a 00:01:37.119 [492/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:37.119 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:37.119 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:37.119 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.379 [496/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:37.379 [497/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:37.379 [498/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:37.379 [499/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:37.379 [500/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:37.379 [501/745] Linking static target lib/librte_graph.a 00:01:37.379 [502/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:37.379 [503/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:37.379 [504/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:37.643 [505/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:37.643 [506/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:37.643 [507/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:37.643 [508/745] Linking static target drivers/librte_bus_pci.a 00:01:37.643 [509/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:37.643 [510/745] Generating lib/ipsec.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:37.643 [511/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:37.643 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:37.905 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:37.905 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.164 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:38.164 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.164 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:38.429 [518/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:38.429 [519/745] Linking static target lib/librte_port.a 00:01:38.429 [520/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:38.429 [521/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.429 [522/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:38.695 [523/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:38.695 [524/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:38.695 [525/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:38.695 [526/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:38.695 [527/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.955 [528/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:38.955 [529/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:38.955 [530/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:38.955 [531/745] 
Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.955 [532/745] Linking static target drivers/librte_mempool_ring.a 00:01:38.955 [533/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.955 [534/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:39.217 [535/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:39.217 [536/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:39.217 [537/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:39.217 [538/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.217 [539/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.481 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:39.481 [541/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:39.745 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:39.745 [543/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:39.745 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:39.745 [545/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:40.003 [546/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:40.003 [547/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:40.003 [548/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:40.266 [549/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:40.266 [550/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 
00:01:40.266 [551/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:40.266 [552/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:40.266 [553/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:40.839 [554/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:40.839 [555/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:40.840 [556/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:40.840 [557/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:40.840 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:41.104 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:41.104 [560/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:41.104 [561/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:41.371 [562/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:41.371 [563/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:41.371 [564/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:41.371 [565/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:41.371 [566/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:41.636 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:41.636 [568/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:41.636 [569/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:41.636 [570/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:41.636 [571/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 
00:01:41.636 [572/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:41.636 [573/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:41.896 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:41.896 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:42.160 [576/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:42.160 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:42.160 [578/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.160 [579/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:42.160 [580/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:42.160 [581/745] Linking target lib/librte_eal.so.23.0 00:01:42.160 [582/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:42.160 [583/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:42.160 [584/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:42.424 [585/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:42.424 [586/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:42.424 [587/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:42.424 [588/745] Linking target lib/librte_ring.so.23.0 00:01:42.690 [589/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:42.690 [590/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:42.690 [591/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:42.690 [592/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:42.690 [593/745] Linking target lib/librte_meter.so.23.0 00:01:42.952 [594/745] Linking target lib/librte_pci.so.23.0 00:01:42.952 [595/745] Linking target lib/librte_rcu.so.23.0 00:01:42.952 [596/745] Linking target lib/librte_mempool.so.23.0 00:01:42.952 [597/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:42.952 [598/745] Linking target lib/librte_timer.so.23.0 00:01:42.952 [599/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:42.952 [600/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:42.952 [601/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:43.214 [602/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:43.214 [603/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:43.214 [604/745] Linking target lib/librte_acl.so.23.0 00:01:43.214 [605/745] Linking target lib/librte_cfgfile.so.23.0 00:01:43.214 [606/745] Linking target lib/librte_jobstats.so.23.0 00:01:43.214 [607/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:43.214 [608/745] Linking target lib/librte_rawdev.so.23.0 00:01:43.214 [609/745] Linking target lib/librte_mbuf.so.23.0 00:01:43.214 [610/745] Linking target lib/librte_rib.so.23.0 00:01:43.214 [611/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:43.214 [612/745] Linking target lib/librte_dmadev.so.23.0 00:01:43.214 [613/745] Linking target lib/librte_stack.so.23.0 00:01:43.214 [614/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:43.215 [615/745] Linking target lib/librte_graph.so.23.0 00:01:43.215 [616/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:43.215 [617/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:43.215 [618/745] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:43.215 [619/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:43.215 [620/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:43.215 [621/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:43.215 [622/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:43.474 [623/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:43.474 [624/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:43.474 [625/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:43.474 [626/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:43.474 [627/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:43.474 [628/745] Linking target lib/librte_bbdev.so.23.0 00:01:43.474 [629/745] Linking target lib/librte_fib.so.23.0 00:01:43.474 [630/745] Linking target lib/librte_net.so.23.0 00:01:43.474 [631/745] Linking target lib/librte_compressdev.so.23.0 00:01:43.474 [632/745] Linking target lib/librte_distributor.so.23.0 00:01:43.474 [633/745] Linking target lib/librte_cryptodev.so.23.0 00:01:43.474 [634/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:43.474 [635/745] Linking target lib/librte_gpudev.so.23.0 00:01:43.474 [636/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:43.474 [637/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:43.474 [638/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:43.474 [639/745] Linking target lib/librte_reorder.so.23.0 00:01:43.474 [640/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:43.474 [641/745] Linking target lib/librte_regexdev.so.23.0 00:01:43.474 [642/745] Linking target 
lib/librte_sched.so.23.0 00:01:43.474 [643/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:43.733 [644/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:43.733 [645/745] Linking target lib/librte_hash.so.23.0 00:01:43.733 [646/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:43.733 [647/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:43.733 [648/745] Linking target lib/librte_cmdline.so.23.0 00:01:43.733 [649/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:43.733 [650/745] Linking target lib/librte_ethdev.so.23.0 00:01:43.733 [651/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:43.733 [652/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:43.733 [653/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:43.733 [654/745] Linking target lib/librte_security.so.23.0 00:01:43.733 [655/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:43.991 [656/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:43.991 [657/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:43.991 [658/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:43.991 [659/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:43.991 [660/745] Linking target lib/librte_member.so.23.0 00:01:43.991 [661/745] Linking target lib/librte_power.so.23.0 00:01:43.991 [662/745] Linking target lib/librte_pcapng.so.23.0 00:01:43.991 [663/745] Linking target lib/librte_bpf.so.23.0 00:01:43.991 [664/745] Linking target lib/librte_lpm.so.23.0 00:01:43.991 [665/745] Linking target lib/librte_efd.so.23.0 00:01:43.991 [666/745] Linking target lib/librte_gro.so.23.0 00:01:43.991 [667/745] 
Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:43.991 [668/745] Linking target lib/librte_ip_frag.so.23.0 00:01:43.991 [669/745] Linking target lib/librte_gso.so.23.0 00:01:43.991 [670/745] Linking target lib/librte_metrics.so.23.0 00:01:43.991 [671/745] Linking target lib/librte_ipsec.so.23.0 00:01:43.991 [672/745] Linking target lib/librte_eventdev.so.23.0 00:01:43.991 [673/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:43.991 [674/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:44.250 [675/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:44.250 [676/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:44.250 [677/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:44.250 [678/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:44.250 [679/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:44.250 [680/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:44.250 [681/745] Linking target lib/librte_pdump.so.23.0 00:01:44.250 [682/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:44.250 [683/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:44.250 [684/745] Linking target lib/librte_bitratestats.so.23.0 00:01:44.250 [685/745] Linking target lib/librte_latencystats.so.23.0 00:01:44.250 [686/745] Linking target lib/librte_port.so.23.0 00:01:44.508 [687/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:44.508 [688/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:44.508 [689/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:44.508 [690/745] Linking target lib/librte_table.so.23.0 00:01:44.508 [691/745] Compiling C object 
app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:44.767 [692/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:44.767 [693/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:44.767 [694/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:44.767 [695/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:45.024 [696/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:45.024 [697/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:45.282 [698/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:45.282 [699/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:45.540 [700/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:45.540 [701/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:45.540 [702/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:45.798 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:45.798 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:45.798 [705/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:45.798 [706/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:45.798 [707/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:45.798 [708/745] Linking static target drivers/librte_net_i40e.a 00:01:46.364 [709/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:46.364 [710/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:46.364 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson 
to capture output) 00:01:46.623 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:01:47.556 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:47.556 [714/745] Linking static target lib/librte_node.a 00:01:47.813 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.813 [716/745] Linking target lib/librte_node.so.23.0 00:01:47.813 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:48.377 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:48.634 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:56.772 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:28.847 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:28.847 [722/745] Linking static target lib/librte_vhost.a 00:02:28.847 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.847 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:41.051 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:41.051 [726/745] Linking static target lib/librte_pipeline.a 00:02:41.051 [727/745] Linking target app/dpdk-test-cmdline 00:02:41.051 [728/745] Linking target app/dpdk-test-gpudev 00:02:41.051 [729/745] Linking target app/dpdk-test-acl 00:02:41.051 [730/745] Linking target app/dpdk-test-fib 00:02:41.051 [731/745] Linking target app/dpdk-test-sad 00:02:41.051 [732/745] Linking target app/dpdk-test-flow-perf 00:02:41.051 [733/745] Linking target app/dpdk-test-regex 00:02:41.051 [734/745] Linking target app/dpdk-pdump 00:02:41.051 [735/745] Linking target app/dpdk-test-security-perf 00:02:41.051 [736/745] Linking target app/dpdk-test-pipeline 00:02:41.051 [737/745] Linking target app/dpdk-proc-info 00:02:41.051 [738/745] Linking target app/dpdk-dumpcap 00:02:41.051 [739/745] Linking target 
app/dpdk-test-eventdev 00:02:41.051 [740/745] Linking target app/dpdk-test-bbdev 00:02:41.051 [741/745] Linking target app/dpdk-test-compress-perf 00:02:41.051 [742/745] Linking target app/dpdk-test-crypto-perf 00:02:41.051 [743/745] Linking target app/dpdk-testpmd 00:02:42.950 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.950 [745/745] Linking target lib/librte_pipeline.so.23.0 00:02:42.950 23:42:48 build_native_dpdk -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:42.950 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:42.950 [0/1] Installing files. 00:02:43.211 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:43.211 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.211 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.212 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.212 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:43.212 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:43.213 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 
00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.213 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.213 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.213 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:43.214 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.214 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:43.214 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:43.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:43.215 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:43.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.218 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:43.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:43.218 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.218 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.219 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.789 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.789 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.789 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.789 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.789 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.789 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.789 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.789 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.789 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.789 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:02:43.789 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.789 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:02:43.789 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.789 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:02:43.789 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:43.789 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:02:43.789 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:43.789 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:43.789 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:43.789 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:43.789 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:43.789 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:43.789 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:43.789 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:43.789 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:43.789 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:43.789 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:43.789 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:43.789 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:43.789 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:43.789 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:43.789 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:43.789 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:43.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:43.791 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.792 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:43.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:43.793 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:43.793 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:43.793 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:43.793 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:43.793 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 
00:02:43.793 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:43.793 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:43.793 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:43.793 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:43.793 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:43.793 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:43.793 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:43.793 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:43.793 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:43.793 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:43.793 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:43.793 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:43.793 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:43.794 Installing symlink pointing to librte_ethdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:43.794 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:43.794 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:43.794 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:43.794 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:43.794 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:43.794 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:43.794 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:43.794 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:43.794 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:43.794 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:43.794 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:43.794 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:43.794 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:43.794 Installing symlink pointing to 
librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:43.794 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:43.794 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:43.794 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:43.794 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:43.794 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:43.794 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:43.794 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:43.794 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:43.794 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:43.794 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:43.794 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:43.794 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:43.794 Installing symlink pointing to librte_distributor.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:43.794 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:43.794 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:43.794 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:43.794 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:43.794 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:43.794 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:43.794 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:43.794 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:43.794 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:43.794 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:43.794 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:43.794 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:43.794 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:43.794 Installing 
symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:43.794 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:43.794 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:43.794 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:43.794 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:43.794 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:43.794 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:43.794 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:43.794 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:43.794 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:43.794 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:43.794 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:43.794 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:43.794 Installing symlink pointing to librte_regexdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:43.794 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:43.794 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:43.794 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:43.794 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:43.794 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:43.794 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:43.794 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:43.794 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:43.794 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:43.794 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:43.794 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:43.794 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:43.794 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:43.794 
Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:43.794 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:43.795 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:43.795 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:43.795 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:43.795 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:43.795 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:43.795 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:43.795 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:43.795 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:43.795 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:43.795 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:43.795 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:43.795 Installing symlink pointing to librte_pipeline.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:43.795 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:43.795 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:43.795 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:43.795 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:43.795 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:43.795 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:43.795 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:43.795 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:43.795 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:43.795 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:43.795 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:43.795 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:43.795 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:43.795 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:43.795 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:43.795 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:43.795 
'./librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:43.795 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:43.795 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:43.795 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:43.795 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:43.795 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:43.795 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:43.795 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:43.795 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:43.795 23:42:49 build_native_dpdk -- common/autobuild_common.sh@192 -- $ uname -s 00:02:43.795 23:42:49 build_native_dpdk -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:43.795 23:42:49 build_native_dpdk -- common/autobuild_common.sh@203 -- $ cat 00:02:43.795 23:42:49 build_native_dpdk -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:43.795 00:02:43.795 real 1m22.565s 00:02:43.795 user 14m29.505s 00:02:43.795 sys 1m48.739s 00:02:43.795 23:42:49 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:43.795 23:42:49 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:43.795 ************************************ 00:02:43.795 END TEST build_native_dpdk 00:02:43.795 ************************************ 
00:02:43.795 23:42:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:43.795 23:42:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:43.795 23:42:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:43.795 23:42:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:43.795 23:42:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:43.795 23:42:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:43.795 23:42:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:43.795 23:42:49 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:43.795 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:44.053 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.053 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.053 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:44.311 Using 'verbs' RDMA provider 00:02:54.846 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:02.989 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:03.247 Creating mk/config.mk...done. 00:03:03.247 Creating mk/cc.flags.mk...done. 00:03:03.247 Type 'make' to build. 
00:03:03.247 23:43:08 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:03.247 23:43:08 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:03.247 23:43:08 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:03.247 23:43:08 -- common/autotest_common.sh@10 -- $ set +x 00:03:03.247 ************************************ 00:03:03.247 START TEST make 00:03:03.247 ************************************ 00:03:03.247 23:43:08 make -- common/autotest_common.sh@1121 -- $ make -j48 00:03:03.504 make[1]: Nothing to be done for 'all'. 00:03:05.418 The Meson build system 00:03:05.418 Version: 1.3.1 00:03:05.418 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:05.418 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:05.418 Build type: native build 00:03:05.418 Project name: libvfio-user 00:03:05.418 Project version: 0.0.1 00:03:05.418 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:05.418 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:05.418 Host machine cpu family: x86_64 00:03:05.418 Host machine cpu: x86_64 00:03:05.418 Run-time dependency threads found: YES 00:03:05.418 Library dl found: YES 00:03:05.418 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:05.418 Run-time dependency json-c found: YES 0.17 00:03:05.418 Run-time dependency cmocka found: YES 1.1.7 00:03:05.418 Program pytest-3 found: NO 00:03:05.418 Program flake8 found: NO 00:03:05.418 Program misspell-fixer found: NO 00:03:05.418 Program restructuredtext-lint found: NO 00:03:05.418 Program valgrind found: YES (/usr/bin/valgrind) 00:03:05.418 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:05.418 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:05.418 Compiler for C supports arguments -Wwrite-strings: YES 00:03:05.418 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but 
uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:05.418 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:05.419 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:05.419 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:05.419 Build targets in project: 8 00:03:05.419 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:05.419 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:05.419 00:03:05.419 libvfio-user 0.0.1 00:03:05.419 00:03:05.419 User defined options 00:03:05.419 buildtype : debug 00:03:05.419 default_library: shared 00:03:05.419 libdir : /usr/local/lib 00:03:05.419 00:03:05.419 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:06.002 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:06.002 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:06.002 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:06.269 [3/37] Compiling C object samples/null.p/null.c.o 00:03:06.269 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:06.269 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:06.269 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:06.269 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:06.269 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:06.269 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:06.269 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:06.269 [11/37] Compiling C object 
samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:06.269 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:06.269 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:06.269 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:06.269 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:06.269 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:06.269 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:06.269 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:06.269 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:06.269 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:06.269 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:06.269 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:06.269 [23/37] Compiling C object samples/server.p/server.c.o 00:03:06.269 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:06.269 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:06.269 [26/37] Compiling C object samples/client.p/client.c.o 00:03:06.531 [27/37] Linking target samples/client 00:03:06.531 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:06.531 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:06.531 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:06.531 [31/37] Linking target test/unit_tests 00:03:06.792 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:06.792 [33/37] Linking target samples/null 00:03:06.792 [34/37] Linking target samples/lspci 00:03:06.792 [35/37] Linking target samples/server 00:03:06.792 [36/37] Linking target samples/gpio-pci-idio-16 00:03:06.792 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:06.792 INFO: autodetecting backend as ninja 00:03:06.792 INFO: calculating backend command to run: 
/usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:07.053 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:07.626 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:07.626 ninja: no work to do. 00:03:19.823 CC lib/ut_mock/mock.o 00:03:19.823 CC lib/log/log.o 00:03:19.823 CC lib/log/log_flags.o 00:03:19.823 CC lib/log/log_deprecated.o 00:03:19.823 CC lib/ut/ut.o 00:03:19.823 LIB libspdk_log.a 00:03:19.823 LIB libspdk_ut_mock.a 00:03:19.823 LIB libspdk_ut.a 00:03:19.823 SO libspdk_ut_mock.so.6.0 00:03:19.823 SO libspdk_ut.so.2.0 00:03:19.823 SO libspdk_log.so.7.0 00:03:19.823 SYMLINK libspdk_ut_mock.so 00:03:19.823 SYMLINK libspdk_ut.so 00:03:19.823 SYMLINK libspdk_log.so 00:03:19.823 CC lib/util/base64.o 00:03:19.823 CC lib/util/bit_array.o 00:03:19.823 CC lib/util/cpuset.o 00:03:19.823 CC lib/ioat/ioat.o 00:03:19.823 CXX lib/trace_parser/trace.o 00:03:19.823 CC lib/util/crc16.o 00:03:19.823 CC lib/dma/dma.o 00:03:19.823 CC lib/util/crc32.o 00:03:19.823 CC lib/util/crc32c.o 00:03:19.823 CC lib/util/crc32_ieee.o 00:03:19.823 CC lib/util/crc64.o 00:03:19.823 CC lib/util/dif.o 00:03:19.823 CC lib/util/fd.o 00:03:19.823 CC lib/util/file.o 00:03:19.823 CC lib/util/hexlify.o 00:03:19.823 CC lib/util/iov.o 00:03:19.823 CC lib/util/math.o 00:03:19.823 CC lib/util/pipe.o 00:03:19.823 CC lib/util/strerror_tls.o 00:03:19.823 CC lib/util/string.o 00:03:19.823 CC lib/util/uuid.o 00:03:19.823 CC lib/util/fd_group.o 00:03:19.823 CC lib/util/zipf.o 00:03:19.823 CC lib/util/xor.o 00:03:19.823 CC lib/vfio_user/host/vfio_user_pci.o 00:03:19.823 CC lib/vfio_user/host/vfio_user.o 00:03:19.823 LIB libspdk_dma.a 00:03:19.823 SO libspdk_dma.so.4.0 00:03:19.823 SYMLINK libspdk_dma.so 00:03:19.823 LIB libspdk_ioat.a 00:03:19.823 LIB 
libspdk_vfio_user.a 00:03:19.823 SO libspdk_ioat.so.7.0 00:03:19.823 SO libspdk_vfio_user.so.5.0 00:03:19.823 SYMLINK libspdk_ioat.so 00:03:19.823 SYMLINK libspdk_vfio_user.so 00:03:19.823 LIB libspdk_util.a 00:03:19.823 SO libspdk_util.so.9.0 00:03:20.081 SYMLINK libspdk_util.so 00:03:20.081 LIB libspdk_trace_parser.a 00:03:20.081 CC lib/rdma/common.o 00:03:20.081 CC lib/json/json_parse.o 00:03:20.081 CC lib/env_dpdk/env.o 00:03:20.081 CC lib/vmd/vmd.o 00:03:20.081 CC lib/rdma/rdma_verbs.o 00:03:20.081 CC lib/json/json_util.o 00:03:20.081 CC lib/vmd/led.o 00:03:20.081 CC lib/env_dpdk/memory.o 00:03:20.081 CC lib/json/json_write.o 00:03:20.081 CC lib/conf/conf.o 00:03:20.081 CC lib/idxd/idxd.o 00:03:20.081 CC lib/env_dpdk/pci.o 00:03:20.081 CC lib/idxd/idxd_user.o 00:03:20.081 CC lib/env_dpdk/init.o 00:03:20.081 CC lib/idxd/idxd_kernel.o 00:03:20.081 CC lib/env_dpdk/threads.o 00:03:20.081 CC lib/env_dpdk/pci_ioat.o 00:03:20.081 CC lib/env_dpdk/pci_virtio.o 00:03:20.081 CC lib/env_dpdk/pci_vmd.o 00:03:20.081 CC lib/env_dpdk/pci_idxd.o 00:03:20.081 CC lib/env_dpdk/pci_event.o 00:03:20.081 CC lib/env_dpdk/sigbus_handler.o 00:03:20.081 CC lib/env_dpdk/pci_dpdk.o 00:03:20.081 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:20.338 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:20.338 SO libspdk_trace_parser.so.5.0 00:03:20.338 SYMLINK libspdk_trace_parser.so 00:03:20.597 LIB libspdk_conf.a 00:03:20.597 LIB libspdk_rdma.a 00:03:20.597 SO libspdk_conf.so.6.0 00:03:20.597 SO libspdk_rdma.so.6.0 00:03:20.597 SYMLINK libspdk_conf.so 00:03:20.597 SYMLINK libspdk_rdma.so 00:03:20.597 LIB libspdk_json.a 00:03:20.597 SO libspdk_json.so.6.0 00:03:20.597 SYMLINK libspdk_json.so 00:03:20.855 LIB libspdk_idxd.a 00:03:20.855 SO libspdk_idxd.so.12.0 00:03:20.855 CC lib/jsonrpc/jsonrpc_server.o 00:03:20.855 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:20.855 CC lib/jsonrpc/jsonrpc_client.o 00:03:20.855 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:20.855 SYMLINK libspdk_idxd.so 00:03:20.855 LIB libspdk_vmd.a 
00:03:20.855 SO libspdk_vmd.so.6.0 00:03:20.855 SYMLINK libspdk_vmd.so 00:03:21.113 LIB libspdk_jsonrpc.a 00:03:21.113 SO libspdk_jsonrpc.so.6.0 00:03:21.113 SYMLINK libspdk_jsonrpc.so 00:03:21.371 CC lib/rpc/rpc.o 00:03:21.629 LIB libspdk_rpc.a 00:03:21.629 SO libspdk_rpc.so.6.0 00:03:21.629 SYMLINK libspdk_rpc.so 00:03:21.887 CC lib/keyring/keyring.o 00:03:21.887 CC lib/trace/trace.o 00:03:21.887 CC lib/notify/notify.o 00:03:21.887 CC lib/notify/notify_rpc.o 00:03:21.887 CC lib/trace/trace_flags.o 00:03:21.887 CC lib/keyring/keyring_rpc.o 00:03:21.887 CC lib/trace/trace_rpc.o 00:03:22.144 LIB libspdk_notify.a 00:03:22.144 SO libspdk_notify.so.6.0 00:03:22.144 LIB libspdk_keyring.a 00:03:22.144 SYMLINK libspdk_notify.so 00:03:22.144 LIB libspdk_trace.a 00:03:22.144 SO libspdk_keyring.so.1.0 00:03:22.144 SO libspdk_trace.so.10.0 00:03:22.144 SYMLINK libspdk_keyring.so 00:03:22.144 SYMLINK libspdk_trace.so 00:03:22.144 LIB libspdk_env_dpdk.a 00:03:22.402 SO libspdk_env_dpdk.so.14.0 00:03:22.402 CC lib/thread/thread.o 00:03:22.402 CC lib/thread/iobuf.o 00:03:22.402 CC lib/sock/sock.o 00:03:22.402 CC lib/sock/sock_rpc.o 00:03:22.402 SYMLINK libspdk_env_dpdk.so 00:03:22.659 LIB libspdk_sock.a 00:03:22.659 SO libspdk_sock.so.9.0 00:03:22.917 SYMLINK libspdk_sock.so 00:03:22.917 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:22.917 CC lib/nvme/nvme_ctrlr.o 00:03:22.917 CC lib/nvme/nvme_fabric.o 00:03:22.917 CC lib/nvme/nvme_ns_cmd.o 00:03:22.917 CC lib/nvme/nvme_ns.o 00:03:22.917 CC lib/nvme/nvme_pcie_common.o 00:03:22.917 CC lib/nvme/nvme_pcie.o 00:03:22.917 CC lib/nvme/nvme_qpair.o 00:03:22.917 CC lib/nvme/nvme.o 00:03:22.917 CC lib/nvme/nvme_quirks.o 00:03:22.917 CC lib/nvme/nvme_transport.o 00:03:22.917 CC lib/nvme/nvme_discovery.o 00:03:22.917 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:22.917 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:22.917 CC lib/nvme/nvme_tcp.o 00:03:22.917 CC lib/nvme/nvme_opal.o 00:03:22.917 CC lib/nvme/nvme_io_msg.o 00:03:22.917 CC lib/nvme/nvme_poll_group.o 
00:03:22.917 CC lib/nvme/nvme_zns.o 00:03:22.917 CC lib/nvme/nvme_stubs.o 00:03:22.917 CC lib/nvme/nvme_auth.o 00:03:22.917 CC lib/nvme/nvme_cuse.o 00:03:22.917 CC lib/nvme/nvme_vfio_user.o 00:03:22.917 CC lib/nvme/nvme_rdma.o 00:03:23.851 LIB libspdk_thread.a 00:03:23.851 SO libspdk_thread.so.10.0 00:03:24.109 SYMLINK libspdk_thread.so 00:03:24.109 CC lib/blob/blobstore.o 00:03:24.109 CC lib/vfu_tgt/tgt_endpoint.o 00:03:24.109 CC lib/virtio/virtio.o 00:03:24.109 CC lib/init/json_config.o 00:03:24.109 CC lib/accel/accel.o 00:03:24.109 CC lib/blob/request.o 00:03:24.109 CC lib/vfu_tgt/tgt_rpc.o 00:03:24.109 CC lib/virtio/virtio_vhost_user.o 00:03:24.109 CC lib/accel/accel_rpc.o 00:03:24.109 CC lib/init/subsystem.o 00:03:24.109 CC lib/init/subsystem_rpc.o 00:03:24.109 CC lib/blob/zeroes.o 00:03:24.109 CC lib/accel/accel_sw.o 00:03:24.109 CC lib/virtio/virtio_vfio_user.o 00:03:24.109 CC lib/virtio/virtio_pci.o 00:03:24.109 CC lib/blob/blob_bs_dev.o 00:03:24.109 CC lib/init/rpc.o 00:03:24.366 LIB libspdk_init.a 00:03:24.623 SO libspdk_init.so.5.0 00:03:24.623 LIB libspdk_vfu_tgt.a 00:03:24.623 LIB libspdk_virtio.a 00:03:24.623 SYMLINK libspdk_init.so 00:03:24.623 SO libspdk_vfu_tgt.so.3.0 00:03:24.623 SO libspdk_virtio.so.7.0 00:03:24.623 SYMLINK libspdk_vfu_tgt.so 00:03:24.623 SYMLINK libspdk_virtio.so 00:03:24.623 CC lib/event/app.o 00:03:24.623 CC lib/event/reactor.o 00:03:24.623 CC lib/event/log_rpc.o 00:03:24.623 CC lib/event/app_rpc.o 00:03:24.623 CC lib/event/scheduler_static.o 00:03:25.190 LIB libspdk_event.a 00:03:25.190 SO libspdk_event.so.13.0 00:03:25.190 SYMLINK libspdk_event.so 00:03:25.190 LIB libspdk_accel.a 00:03:25.190 SO libspdk_accel.so.15.0 00:03:25.448 SYMLINK libspdk_accel.so 00:03:25.448 LIB libspdk_nvme.a 00:03:25.448 CC lib/bdev/bdev.o 00:03:25.448 CC lib/bdev/bdev_rpc.o 00:03:25.448 CC lib/bdev/bdev_zone.o 00:03:25.448 CC lib/bdev/part.o 00:03:25.448 CC lib/bdev/scsi_nvme.o 00:03:25.448 SO libspdk_nvme.so.13.0 00:03:26.014 SYMLINK 
libspdk_nvme.so 00:03:27.388 LIB libspdk_blob.a 00:03:27.388 SO libspdk_blob.so.11.0 00:03:27.388 SYMLINK libspdk_blob.so 00:03:27.388 CC lib/blobfs/blobfs.o 00:03:27.388 CC lib/blobfs/tree.o 00:03:27.388 CC lib/lvol/lvol.o 00:03:27.961 LIB libspdk_bdev.a 00:03:27.961 SO libspdk_bdev.so.15.0 00:03:28.251 SYMLINK libspdk_bdev.so 00:03:28.251 LIB libspdk_blobfs.a 00:03:28.251 SO libspdk_blobfs.so.10.0 00:03:28.251 CC lib/nvmf/ctrlr.o 00:03:28.251 CC lib/nbd/nbd.o 00:03:28.251 CC lib/ublk/ublk.o 00:03:28.251 CC lib/scsi/dev.o 00:03:28.251 CC lib/nvmf/ctrlr_discovery.o 00:03:28.251 CC lib/ublk/ublk_rpc.o 00:03:28.251 CC lib/nbd/nbd_rpc.o 00:03:28.251 CC lib/scsi/lun.o 00:03:28.251 CC lib/nvmf/ctrlr_bdev.o 00:03:28.251 CC lib/ftl/ftl_core.o 00:03:28.251 CC lib/nvmf/subsystem.o 00:03:28.251 CC lib/scsi/port.o 00:03:28.251 CC lib/ftl/ftl_init.o 00:03:28.251 CC lib/scsi/scsi.o 00:03:28.251 CC lib/nvmf/nvmf.o 00:03:28.251 CC lib/ftl/ftl_layout.o 00:03:28.251 CC lib/scsi/scsi_bdev.o 00:03:28.251 CC lib/nvmf/nvmf_rpc.o 00:03:28.251 CC lib/nvmf/transport.o 00:03:28.251 CC lib/ftl/ftl_debug.o 00:03:28.251 CC lib/scsi/scsi_pr.o 00:03:28.251 CC lib/ftl/ftl_io.o 00:03:28.251 CC lib/scsi/scsi_rpc.o 00:03:28.251 CC lib/nvmf/tcp.o 00:03:28.251 CC lib/ftl/ftl_sb.o 00:03:28.251 CC lib/nvmf/stubs.o 00:03:28.251 CC lib/nvmf/mdns_server.o 00:03:28.251 CC lib/scsi/task.o 00:03:28.251 CC lib/ftl/ftl_l2p.o 00:03:28.251 CC lib/nvmf/vfio_user.o 00:03:28.251 CC lib/ftl/ftl_l2p_flat.o 00:03:28.251 CC lib/ftl/ftl_nv_cache.o 00:03:28.251 CC lib/nvmf/rdma.o 00:03:28.251 CC lib/ftl/ftl_band.o 00:03:28.251 CC lib/nvmf/auth.o 00:03:28.251 CC lib/ftl/ftl_band_ops.o 00:03:28.251 CC lib/ftl/ftl_writer.o 00:03:28.251 CC lib/ftl/ftl_rq.o 00:03:28.251 CC lib/ftl/ftl_reloc.o 00:03:28.251 CC lib/ftl/ftl_l2p_cache.o 00:03:28.251 CC lib/ftl/ftl_p2l.o 00:03:28.251 CC lib/ftl/mngt/ftl_mngt.o 00:03:28.251 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:28.251 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:28.251 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:03:28.251 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:28.517 SYMLINK libspdk_blobfs.so 00:03:28.517 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:28.517 LIB libspdk_lvol.a 00:03:28.517 SO libspdk_lvol.so.10.0 00:03:28.783 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:28.784 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:28.784 SYMLINK libspdk_lvol.so 00:03:28.784 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:28.784 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:28.784 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:28.784 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:28.784 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:28.784 CC lib/ftl/utils/ftl_conf.o 00:03:28.784 CC lib/ftl/utils/ftl_md.o 00:03:28.784 CC lib/ftl/utils/ftl_mempool.o 00:03:28.784 CC lib/ftl/utils/ftl_bitmap.o 00:03:28.784 CC lib/ftl/utils/ftl_property.o 00:03:28.784 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:28.784 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:28.784 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:28.784 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:28.784 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:28.784 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:28.784 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:29.042 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:29.042 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:29.042 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:29.042 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:29.042 CC lib/ftl/base/ftl_base_dev.o 00:03:29.042 CC lib/ftl/base/ftl_base_bdev.o 00:03:29.042 CC lib/ftl/ftl_trace.o 00:03:29.300 LIB libspdk_nbd.a 00:03:29.300 SO libspdk_nbd.so.7.0 00:03:29.300 SYMLINK libspdk_nbd.so 00:03:29.300 LIB libspdk_scsi.a 00:03:29.300 SO libspdk_scsi.so.9.0 00:03:29.300 LIB libspdk_ublk.a 00:03:29.559 SYMLINK libspdk_scsi.so 00:03:29.559 SO libspdk_ublk.so.3.0 00:03:29.559 SYMLINK libspdk_ublk.so 00:03:29.559 CC lib/vhost/vhost.o 00:03:29.559 CC lib/vhost/vhost_rpc.o 00:03:29.559 CC lib/vhost/vhost_scsi.o 00:03:29.559 CC lib/vhost/vhost_blk.o 00:03:29.559 CC lib/vhost/rte_vhost_user.o 00:03:29.559 CC 
lib/iscsi/conn.o 00:03:29.559 CC lib/iscsi/init_grp.o 00:03:29.559 CC lib/iscsi/iscsi.o 00:03:29.559 CC lib/iscsi/md5.o 00:03:29.559 CC lib/iscsi/param.o 00:03:29.559 CC lib/iscsi/portal_grp.o 00:03:29.559 CC lib/iscsi/tgt_node.o 00:03:29.559 CC lib/iscsi/iscsi_subsystem.o 00:03:29.559 CC lib/iscsi/iscsi_rpc.o 00:03:29.559 CC lib/iscsi/task.o 00:03:29.817 LIB libspdk_ftl.a 00:03:30.074 SO libspdk_ftl.so.9.0 00:03:30.333 SYMLINK libspdk_ftl.so 00:03:30.900 LIB libspdk_vhost.a 00:03:30.900 SO libspdk_vhost.so.8.0 00:03:30.900 LIB libspdk_nvmf.a 00:03:30.900 SYMLINK libspdk_vhost.so 00:03:30.900 SO libspdk_nvmf.so.18.0 00:03:30.900 LIB libspdk_iscsi.a 00:03:31.158 SO libspdk_iscsi.so.8.0 00:03:31.158 SYMLINK libspdk_nvmf.so 00:03:31.158 SYMLINK libspdk_iscsi.so 00:03:31.416 CC module/vfu_device/vfu_virtio.o 00:03:31.416 CC module/vfu_device/vfu_virtio_blk.o 00:03:31.416 CC module/env_dpdk/env_dpdk_rpc.o 00:03:31.416 CC module/vfu_device/vfu_virtio_scsi.o 00:03:31.416 CC module/vfu_device/vfu_virtio_rpc.o 00:03:31.674 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:31.674 CC module/accel/error/accel_error.o 00:03:31.674 CC module/scheduler/gscheduler/gscheduler.o 00:03:31.674 CC module/accel/iaa/accel_iaa.o 00:03:31.674 CC module/accel/ioat/accel_ioat.o 00:03:31.674 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:31.674 CC module/accel/error/accel_error_rpc.o 00:03:31.674 CC module/keyring/linux/keyring.o 00:03:31.674 CC module/accel/ioat/accel_ioat_rpc.o 00:03:31.674 CC module/sock/posix/posix.o 00:03:31.674 CC module/accel/iaa/accel_iaa_rpc.o 00:03:31.674 CC module/accel/dsa/accel_dsa.o 00:03:31.674 CC module/keyring/linux/keyring_rpc.o 00:03:31.674 CC module/accel/dsa/accel_dsa_rpc.o 00:03:31.674 CC module/blob/bdev/blob_bdev.o 00:03:31.674 CC module/keyring/file/keyring.o 00:03:31.674 CC module/keyring/file/keyring_rpc.o 00:03:31.674 LIB libspdk_env_dpdk_rpc.a 00:03:31.674 SO libspdk_env_dpdk_rpc.so.6.0 00:03:31.674 SYMLINK libspdk_env_dpdk_rpc.so 
00:03:31.674 LIB libspdk_keyring_linux.a 00:03:31.674 LIB libspdk_keyring_file.a 00:03:31.674 LIB libspdk_scheduler_dpdk_governor.a 00:03:31.674 LIB libspdk_scheduler_gscheduler.a 00:03:31.674 SO libspdk_keyring_linux.so.1.0 00:03:31.674 SO libspdk_keyring_file.so.1.0 00:03:31.674 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:31.674 SO libspdk_scheduler_gscheduler.so.4.0 00:03:31.674 LIB libspdk_accel_error.a 00:03:31.674 LIB libspdk_scheduler_dynamic.a 00:03:31.932 LIB libspdk_accel_ioat.a 00:03:31.932 LIB libspdk_accel_iaa.a 00:03:31.932 SO libspdk_scheduler_dynamic.so.4.0 00:03:31.932 SO libspdk_accel_error.so.2.0 00:03:31.932 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:31.932 SYMLINK libspdk_keyring_linux.so 00:03:31.932 SYMLINK libspdk_scheduler_gscheduler.so 00:03:31.932 SO libspdk_accel_ioat.so.6.0 00:03:31.932 SYMLINK libspdk_keyring_file.so 00:03:31.932 SO libspdk_accel_iaa.so.3.0 00:03:31.932 SYMLINK libspdk_scheduler_dynamic.so 00:03:31.932 LIB libspdk_accel_dsa.a 00:03:31.932 SYMLINK libspdk_accel_error.so 00:03:31.932 LIB libspdk_blob_bdev.a 00:03:31.932 SYMLINK libspdk_accel_ioat.so 00:03:31.932 SO libspdk_accel_dsa.so.5.0 00:03:31.932 SYMLINK libspdk_accel_iaa.so 00:03:31.932 SO libspdk_blob_bdev.so.11.0 00:03:31.932 SYMLINK libspdk_accel_dsa.so 00:03:31.932 SYMLINK libspdk_blob_bdev.so 00:03:32.191 LIB libspdk_vfu_device.a 00:03:32.191 SO libspdk_vfu_device.so.3.0 00:03:32.191 CC module/blobfs/bdev/blobfs_bdev.o 00:03:32.191 CC module/bdev/error/vbdev_error.o 00:03:32.191 CC module/bdev/delay/vbdev_delay.o 00:03:32.191 CC module/bdev/passthru/vbdev_passthru.o 00:03:32.191 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:32.191 CC module/bdev/error/vbdev_error_rpc.o 00:03:32.191 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:32.191 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:32.191 CC module/bdev/iscsi/bdev_iscsi.o 00:03:32.191 CC module/bdev/malloc/bdev_malloc.o 00:03:32.191 CC module/bdev/ftl/bdev_ftl.o 00:03:32.191 CC 
module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:32.191 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:32.191 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:32.191 CC module/bdev/split/vbdev_split.o 00:03:32.191 CC module/bdev/null/bdev_null.o 00:03:32.191 CC module/bdev/lvol/vbdev_lvol.o 00:03:32.191 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:32.191 CC module/bdev/split/vbdev_split_rpc.o 00:03:32.191 CC module/bdev/gpt/gpt.o 00:03:32.191 CC module/bdev/raid/bdev_raid.o 00:03:32.191 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:32.191 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:32.191 CC module/bdev/null/bdev_null_rpc.o 00:03:32.191 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:32.191 CC module/bdev/gpt/vbdev_gpt.o 00:03:32.191 CC module/bdev/raid/bdev_raid_rpc.o 00:03:32.191 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:32.191 CC module/bdev/aio/bdev_aio.o 00:03:32.191 CC module/bdev/raid/bdev_raid_sb.o 00:03:32.191 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:32.191 CC module/bdev/nvme/bdev_nvme.o 00:03:32.191 CC module/bdev/raid/raid0.o 00:03:32.191 CC module/bdev/aio/bdev_aio_rpc.o 00:03:32.191 CC module/bdev/raid/raid1.o 00:03:32.191 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:32.192 CC module/bdev/nvme/nvme_rpc.o 00:03:32.192 CC module/bdev/nvme/bdev_mdns_client.o 00:03:32.192 CC module/bdev/raid/concat.o 00:03:32.192 CC module/bdev/nvme/vbdev_opal.o 00:03:32.192 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:32.192 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:32.450 SYMLINK libspdk_vfu_device.so 00:03:32.450 LIB libspdk_sock_posix.a 00:03:32.450 SO libspdk_sock_posix.so.6.0 00:03:32.708 LIB libspdk_blobfs_bdev.a 00:03:32.708 SYMLINK libspdk_sock_posix.so 00:03:32.708 SO libspdk_blobfs_bdev.so.6.0 00:03:32.708 LIB libspdk_bdev_split.a 00:03:32.708 SO libspdk_bdev_split.so.6.0 00:03:32.708 SYMLINK libspdk_blobfs_bdev.so 00:03:32.708 LIB libspdk_bdev_aio.a 00:03:32.708 LIB libspdk_bdev_null.a 00:03:32.708 LIB libspdk_bdev_error.a 00:03:32.708 SYMLINK 
libspdk_bdev_split.so 00:03:32.708 SO libspdk_bdev_aio.so.6.0 00:03:32.708 SO libspdk_bdev_null.so.6.0 00:03:32.708 LIB libspdk_bdev_passthru.a 00:03:32.708 SO libspdk_bdev_error.so.6.0 00:03:32.708 LIB libspdk_bdev_gpt.a 00:03:32.708 LIB libspdk_bdev_ftl.a 00:03:32.708 LIB libspdk_bdev_malloc.a 00:03:32.708 SO libspdk_bdev_passthru.so.6.0 00:03:32.708 SO libspdk_bdev_gpt.so.6.0 00:03:32.708 LIB libspdk_bdev_delay.a 00:03:32.708 SO libspdk_bdev_ftl.so.6.0 00:03:32.708 SYMLINK libspdk_bdev_aio.so 00:03:32.708 SO libspdk_bdev_malloc.so.6.0 00:03:32.708 SYMLINK libspdk_bdev_null.so 00:03:32.708 SYMLINK libspdk_bdev_error.so 00:03:32.708 SO libspdk_bdev_delay.so.6.0 00:03:32.708 LIB libspdk_bdev_zone_block.a 00:03:32.967 SYMLINK libspdk_bdev_passthru.so 00:03:32.967 SYMLINK libspdk_bdev_gpt.so 00:03:32.967 SO libspdk_bdev_zone_block.so.6.0 00:03:32.967 LIB libspdk_bdev_iscsi.a 00:03:32.967 SYMLINK libspdk_bdev_ftl.so 00:03:32.967 SYMLINK libspdk_bdev_malloc.so 00:03:32.967 SYMLINK libspdk_bdev_delay.so 00:03:32.967 SO libspdk_bdev_iscsi.so.6.0 00:03:32.967 SYMLINK libspdk_bdev_zone_block.so 00:03:32.967 SYMLINK libspdk_bdev_iscsi.so 00:03:32.967 LIB libspdk_bdev_lvol.a 00:03:32.967 LIB libspdk_bdev_virtio.a 00:03:32.967 SO libspdk_bdev_lvol.so.6.0 00:03:32.967 SO libspdk_bdev_virtio.so.6.0 00:03:32.967 SYMLINK libspdk_bdev_lvol.so 00:03:33.226 SYMLINK libspdk_bdev_virtio.so 00:03:33.485 LIB libspdk_bdev_raid.a 00:03:33.485 SO libspdk_bdev_raid.so.6.0 00:03:33.485 SYMLINK libspdk_bdev_raid.so 00:03:34.422 LIB libspdk_bdev_nvme.a 00:03:34.680 SO libspdk_bdev_nvme.so.7.0 00:03:34.680 SYMLINK libspdk_bdev_nvme.so 00:03:34.939 CC module/event/subsystems/iobuf/iobuf.o 00:03:34.939 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:34.939 CC module/event/subsystems/keyring/keyring.o 00:03:34.939 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:34.939 CC module/event/subsystems/sock/sock.o 00:03:34.939 CC module/event/subsystems/scheduler/scheduler.o 00:03:34.939 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:03:34.939 CC module/event/subsystems/vmd/vmd.o 00:03:34.939 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:35.197 LIB libspdk_event_keyring.a 00:03:35.197 LIB libspdk_event_sock.a 00:03:35.197 LIB libspdk_event_scheduler.a 00:03:35.197 LIB libspdk_event_vfu_tgt.a 00:03:35.197 LIB libspdk_event_vhost_blk.a 00:03:35.197 LIB libspdk_event_vmd.a 00:03:35.197 SO libspdk_event_keyring.so.1.0 00:03:35.197 SO libspdk_event_sock.so.5.0 00:03:35.197 SO libspdk_event_scheduler.so.4.0 00:03:35.197 LIB libspdk_event_iobuf.a 00:03:35.197 SO libspdk_event_vfu_tgt.so.3.0 00:03:35.197 SO libspdk_event_vhost_blk.so.3.0 00:03:35.197 SO libspdk_event_vmd.so.6.0 00:03:35.197 SO libspdk_event_iobuf.so.3.0 00:03:35.197 SYMLINK libspdk_event_keyring.so 00:03:35.197 SYMLINK libspdk_event_sock.so 00:03:35.197 SYMLINK libspdk_event_scheduler.so 00:03:35.197 SYMLINK libspdk_event_vfu_tgt.so 00:03:35.197 SYMLINK libspdk_event_vhost_blk.so 00:03:35.197 SYMLINK libspdk_event_vmd.so 00:03:35.197 SYMLINK libspdk_event_iobuf.so 00:03:35.455 CC module/event/subsystems/accel/accel.o 00:03:35.713 LIB libspdk_event_accel.a 00:03:35.713 SO libspdk_event_accel.so.6.0 00:03:35.713 SYMLINK libspdk_event_accel.so 00:03:35.972 CC module/event/subsystems/bdev/bdev.o 00:03:35.972 LIB libspdk_event_bdev.a 00:03:35.972 SO libspdk_event_bdev.so.6.0 00:03:35.972 SYMLINK libspdk_event_bdev.so 00:03:36.230 CC module/event/subsystems/nbd/nbd.o 00:03:36.230 CC module/event/subsystems/scsi/scsi.o 00:03:36.230 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:36.230 CC module/event/subsystems/ublk/ublk.o 00:03:36.230 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:36.488 LIB libspdk_event_ublk.a 00:03:36.489 LIB libspdk_event_nbd.a 00:03:36.489 LIB libspdk_event_scsi.a 00:03:36.489 SO libspdk_event_nbd.so.6.0 00:03:36.489 SO libspdk_event_ublk.so.3.0 00:03:36.489 SO libspdk_event_scsi.so.6.0 00:03:36.489 SYMLINK libspdk_event_ublk.so 00:03:36.489 SYMLINK 
libspdk_event_nbd.so 00:03:36.489 LIB libspdk_event_nvmf.a 00:03:36.489 SYMLINK libspdk_event_scsi.so 00:03:36.489 SO libspdk_event_nvmf.so.6.0 00:03:36.489 SYMLINK libspdk_event_nvmf.so 00:03:36.746 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:36.746 CC module/event/subsystems/iscsi/iscsi.o 00:03:36.746 LIB libspdk_event_vhost_scsi.a 00:03:36.746 SO libspdk_event_vhost_scsi.so.3.0 00:03:36.746 LIB libspdk_event_iscsi.a 00:03:36.746 SO libspdk_event_iscsi.so.6.0 00:03:37.006 SYMLINK libspdk_event_vhost_scsi.so 00:03:37.006 SYMLINK libspdk_event_iscsi.so 00:03:37.006 SO libspdk.so.6.0 00:03:37.006 SYMLINK libspdk.so 00:03:37.264 CC app/trace_record/trace_record.o 00:03:37.264 CXX app/trace/trace.o 00:03:37.264 CC app/spdk_nvme_identify/identify.o 00:03:37.264 CC app/spdk_nvme_discover/discovery_aer.o 00:03:37.264 CC test/rpc_client/rpc_client_test.o 00:03:37.264 TEST_HEADER include/spdk/accel.h 00:03:37.264 CC app/spdk_top/spdk_top.o 00:03:37.264 TEST_HEADER include/spdk/accel_module.h 00:03:37.264 CC app/spdk_nvme_perf/perf.o 00:03:37.264 TEST_HEADER include/spdk/assert.h 00:03:37.264 TEST_HEADER include/spdk/barrier.h 00:03:37.264 CC app/spdk_lspci/spdk_lspci.o 00:03:37.264 TEST_HEADER include/spdk/base64.h 00:03:37.264 TEST_HEADER include/spdk/bdev.h 00:03:37.264 TEST_HEADER include/spdk/bdev_module.h 00:03:37.264 TEST_HEADER include/spdk/bdev_zone.h 00:03:37.264 TEST_HEADER include/spdk/bit_array.h 00:03:37.264 TEST_HEADER include/spdk/bit_pool.h 00:03:37.264 TEST_HEADER include/spdk/blob_bdev.h 00:03:37.264 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:37.264 TEST_HEADER include/spdk/blobfs.h 00:03:37.264 TEST_HEADER include/spdk/blob.h 00:03:37.264 TEST_HEADER include/spdk/conf.h 00:03:37.264 TEST_HEADER include/spdk/config.h 00:03:37.264 TEST_HEADER include/spdk/cpuset.h 00:03:37.264 CC app/spdk_dd/spdk_dd.o 00:03:37.264 TEST_HEADER include/spdk/crc16.h 00:03:37.264 TEST_HEADER include/spdk/crc32.h 00:03:37.264 TEST_HEADER include/spdk/crc64.h 
00:03:37.264 TEST_HEADER include/spdk/dif.h 00:03:37.264 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:37.264 TEST_HEADER include/spdk/dma.h 00:03:37.264 TEST_HEADER include/spdk/endian.h 00:03:37.264 CC app/iscsi_tgt/iscsi_tgt.o 00:03:37.264 TEST_HEADER include/spdk/env_dpdk.h 00:03:37.264 TEST_HEADER include/spdk/env.h 00:03:37.264 CC app/nvmf_tgt/nvmf_main.o 00:03:37.264 TEST_HEADER include/spdk/event.h 00:03:37.264 TEST_HEADER include/spdk/fd_group.h 00:03:37.264 CC app/vhost/vhost.o 00:03:37.264 TEST_HEADER include/spdk/fd.h 00:03:37.264 TEST_HEADER include/spdk/file.h 00:03:37.264 TEST_HEADER include/spdk/ftl.h 00:03:37.264 TEST_HEADER include/spdk/gpt_spec.h 00:03:37.264 TEST_HEADER include/spdk/hexlify.h 00:03:37.264 TEST_HEADER include/spdk/histogram_data.h 00:03:37.264 TEST_HEADER include/spdk/idxd.h 00:03:37.264 TEST_HEADER include/spdk/idxd_spec.h 00:03:37.264 TEST_HEADER include/spdk/init.h 00:03:37.526 TEST_HEADER include/spdk/ioat.h 00:03:37.526 TEST_HEADER include/spdk/ioat_spec.h 00:03:37.526 CC test/app/histogram_perf/histogram_perf.o 00:03:37.526 TEST_HEADER include/spdk/iscsi_spec.h 00:03:37.526 CC test/app/jsoncat/jsoncat.o 00:03:37.526 TEST_HEADER include/spdk/json.h 00:03:37.526 CC app/spdk_tgt/spdk_tgt.o 00:03:37.526 TEST_HEADER include/spdk/jsonrpc.h 00:03:37.526 CC examples/ioat/perf/perf.o 00:03:37.526 TEST_HEADER include/spdk/keyring.h 00:03:37.526 CC examples/nvme/reconnect/reconnect.o 00:03:37.526 CC examples/idxd/perf/perf.o 00:03:37.526 TEST_HEADER include/spdk/keyring_module.h 00:03:37.526 CC app/fio/nvme/fio_plugin.o 00:03:37.526 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:37.526 CC examples/accel/perf/accel_perf.o 00:03:37.526 CC examples/ioat/verify/verify.o 00:03:37.526 TEST_HEADER include/spdk/likely.h 00:03:37.526 CC test/event/reactor/reactor.o 00:03:37.526 TEST_HEADER include/spdk/log.h 00:03:37.526 CC test/event/event_perf/event_perf.o 00:03:37.526 CC examples/sock/hello_world/hello_sock.o 00:03:37.526 CC 
examples/vmd/lsvmd/lsvmd.o 00:03:37.526 TEST_HEADER include/spdk/lvol.h 00:03:37.526 CC examples/util/zipf/zipf.o 00:03:37.526 CC test/nvme/aer/aer.o 00:03:37.526 TEST_HEADER include/spdk/memory.h 00:03:37.526 CC test/app/stub/stub.o 00:03:37.526 CC examples/nvme/arbitration/arbitration.o 00:03:37.526 TEST_HEADER include/spdk/mmio.h 00:03:37.526 CC test/thread/poller_perf/poller_perf.o 00:03:37.526 TEST_HEADER include/spdk/nbd.h 00:03:37.526 CC examples/nvme/hello_world/hello_world.o 00:03:37.526 TEST_HEADER include/spdk/notify.h 00:03:37.526 TEST_HEADER include/spdk/nvme.h 00:03:37.526 TEST_HEADER include/spdk/nvme_intel.h 00:03:37.526 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:37.526 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:37.526 TEST_HEADER include/spdk/nvme_spec.h 00:03:37.526 TEST_HEADER include/spdk/nvme_zns.h 00:03:37.526 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:37.526 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:37.526 TEST_HEADER include/spdk/nvmf.h 00:03:37.526 CC examples/blob/hello_world/hello_blob.o 00:03:37.526 TEST_HEADER include/spdk/nvmf_spec.h 00:03:37.526 CC test/bdev/bdevio/bdevio.o 00:03:37.526 CC test/blobfs/mkfs/mkfs.o 00:03:37.526 CC test/accel/dif/dif.o 00:03:37.526 CC examples/blob/cli/blobcli.o 00:03:37.526 TEST_HEADER include/spdk/nvmf_transport.h 00:03:37.526 CC test/app/bdev_svc/bdev_svc.o 00:03:37.526 CC examples/thread/thread/thread_ex.o 00:03:37.526 TEST_HEADER include/spdk/opal.h 00:03:37.526 CC test/dma/test_dma/test_dma.o 00:03:37.526 TEST_HEADER include/spdk/opal_spec.h 00:03:37.526 CC examples/bdev/hello_world/hello_bdev.o 00:03:37.526 CC examples/nvmf/nvmf/nvmf.o 00:03:37.526 TEST_HEADER include/spdk/pci_ids.h 00:03:37.526 CC examples/bdev/bdevperf/bdevperf.o 00:03:37.526 TEST_HEADER include/spdk/pipe.h 00:03:37.526 TEST_HEADER include/spdk/queue.h 00:03:37.526 TEST_HEADER include/spdk/reduce.h 00:03:37.526 TEST_HEADER include/spdk/rpc.h 00:03:37.526 TEST_HEADER include/spdk/scheduler.h 00:03:37.526 TEST_HEADER 
include/spdk/scsi.h 00:03:37.526 TEST_HEADER include/spdk/scsi_spec.h 00:03:37.526 TEST_HEADER include/spdk/sock.h 00:03:37.526 TEST_HEADER include/spdk/stdinc.h 00:03:37.526 TEST_HEADER include/spdk/string.h 00:03:37.526 TEST_HEADER include/spdk/thread.h 00:03:37.526 TEST_HEADER include/spdk/trace.h 00:03:37.526 TEST_HEADER include/spdk/trace_parser.h 00:03:37.526 CC test/lvol/esnap/esnap.o 00:03:37.526 TEST_HEADER include/spdk/tree.h 00:03:37.526 CC test/env/mem_callbacks/mem_callbacks.o 00:03:37.526 TEST_HEADER include/spdk/ublk.h 00:03:37.526 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:37.526 LINK spdk_lspci 00:03:37.526 TEST_HEADER include/spdk/util.h 00:03:37.526 TEST_HEADER include/spdk/uuid.h 00:03:37.526 TEST_HEADER include/spdk/version.h 00:03:37.526 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:37.526 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:37.526 TEST_HEADER include/spdk/vhost.h 00:03:37.526 TEST_HEADER include/spdk/vmd.h 00:03:37.526 TEST_HEADER include/spdk/xor.h 00:03:37.526 TEST_HEADER include/spdk/zipf.h 00:03:37.526 CXX test/cpp_headers/accel.o 00:03:37.526 LINK rpc_client_test 00:03:37.796 LINK spdk_nvme_discover 00:03:37.796 LINK jsoncat 00:03:37.796 LINK lsvmd 00:03:37.796 LINK interrupt_tgt 00:03:37.796 LINK histogram_perf 00:03:37.796 LINK reactor 00:03:37.796 LINK nvmf_tgt 00:03:37.796 LINK event_perf 00:03:37.797 LINK zipf 00:03:37.797 LINK vhost 00:03:37.797 LINK poller_perf 00:03:37.797 LINK iscsi_tgt 00:03:37.797 LINK spdk_trace_record 00:03:37.797 LINK stub 00:03:37.797 LINK ioat_perf 00:03:37.797 LINK spdk_tgt 00:03:37.797 LINK verify 00:03:37.797 LINK bdev_svc 00:03:37.797 LINK mkfs 00:03:37.797 LINK hello_world 00:03:38.063 LINK hello_sock 00:03:38.063 LINK mem_callbacks 00:03:38.063 CXX test/cpp_headers/accel_module.o 00:03:38.063 LINK hello_blob 00:03:38.063 LINK hello_bdev 00:03:38.063 LINK aer 00:03:38.063 LINK thread 00:03:38.064 LINK spdk_dd 00:03:38.064 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:38.064 LINK nvmf 
00:03:38.064 LINK arbitration 00:03:38.064 LINK idxd_perf 00:03:38.064 CXX test/cpp_headers/assert.o 00:03:38.064 LINK spdk_trace 00:03:38.064 LINK reconnect 00:03:38.064 CXX test/cpp_headers/barrier.o 00:03:38.064 CC test/nvme/reset/reset.o 00:03:38.324 CC examples/nvme/hotplug/hotplug.o 00:03:38.324 LINK bdevio 00:03:38.324 CC examples/vmd/led/led.o 00:03:38.324 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:38.324 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:38.324 LINK test_dma 00:03:38.324 CXX test/cpp_headers/base64.o 00:03:38.324 CC examples/nvme/abort/abort.o 00:03:38.324 CC test/nvme/sgl/sgl.o 00:03:38.324 CXX test/cpp_headers/bdev.o 00:03:38.324 CC test/event/reactor_perf/reactor_perf.o 00:03:38.324 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:38.324 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:38.324 CC app/fio/bdev/fio_plugin.o 00:03:38.324 CC test/nvme/e2edp/nvme_dp.o 00:03:38.324 CXX test/cpp_headers/bdev_module.o 00:03:38.324 CC test/nvme/overhead/overhead.o 00:03:38.324 CC test/env/vtophys/vtophys.o 00:03:38.324 LINK nvme_fuzz 00:03:38.324 LINK nvme_manage 00:03:38.324 LINK dif 00:03:38.324 CXX test/cpp_headers/bdev_zone.o 00:03:38.324 LINK accel_perf 00:03:38.324 CC test/event/app_repeat/app_repeat.o 00:03:38.587 CXX test/cpp_headers/bit_array.o 00:03:38.587 CC test/nvme/err_injection/err_injection.o 00:03:38.587 CC test/event/scheduler/scheduler.o 00:03:38.587 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:38.587 LINK blobcli 00:03:38.587 CXX test/cpp_headers/bit_pool.o 00:03:38.587 LINK spdk_nvme 00:03:38.587 LINK led 00:03:38.587 CC test/nvme/startup/startup.o 00:03:38.587 CC test/env/memory/memory_ut.o 00:03:38.587 CXX test/cpp_headers/blob_bdev.o 00:03:38.587 CC test/nvme/reserve/reserve.o 00:03:38.587 LINK cmb_copy 00:03:38.587 LINK reactor_perf 00:03:38.587 CC test/env/pci/pci_ut.o 00:03:38.587 CC test/nvme/simple_copy/simple_copy.o 00:03:38.587 CC test/nvme/connect_stress/connect_stress.o 00:03:38.587 CC 
test/nvme/boot_partition/boot_partition.o 00:03:38.587 LINK hotplug 00:03:38.587 LINK reset 00:03:38.850 CC test/nvme/compliance/nvme_compliance.o 00:03:38.850 LINK pmr_persistence 00:03:38.850 CC test/nvme/fused_ordering/fused_ordering.o 00:03:38.850 CXX test/cpp_headers/blobfs_bdev.o 00:03:38.850 LINK vtophys 00:03:38.850 CXX test/cpp_headers/blobfs.o 00:03:38.850 CXX test/cpp_headers/blob.o 00:03:38.850 CXX test/cpp_headers/conf.o 00:03:38.850 CC test/nvme/fdp/fdp.o 00:03:38.850 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:38.850 CXX test/cpp_headers/config.o 00:03:38.850 LINK app_repeat 00:03:38.850 CXX test/cpp_headers/cpuset.o 00:03:38.850 CXX test/cpp_headers/crc16.o 00:03:38.850 LINK spdk_nvme_perf 00:03:38.850 CXX test/cpp_headers/crc32.o 00:03:38.850 CXX test/cpp_headers/crc64.o 00:03:38.850 CXX test/cpp_headers/dif.o 00:03:38.850 CC test/nvme/cuse/cuse.o 00:03:38.850 LINK env_dpdk_post_init 00:03:38.850 LINK err_injection 00:03:38.850 LINK sgl 00:03:38.850 CXX test/cpp_headers/dma.o 00:03:38.850 CXX test/cpp_headers/endian.o 00:03:38.850 LINK nvme_dp 00:03:38.850 CXX test/cpp_headers/env_dpdk.o 00:03:38.850 CXX test/cpp_headers/env.o 00:03:38.850 LINK startup 00:03:38.850 LINK overhead 00:03:38.850 LINK scheduler 00:03:39.116 CXX test/cpp_headers/event.o 00:03:39.116 LINK spdk_top 00:03:39.116 CXX test/cpp_headers/fd_group.o 00:03:39.116 CXX test/cpp_headers/fd.o 00:03:39.116 CXX test/cpp_headers/file.o 00:03:39.116 LINK bdevperf 00:03:39.116 LINK abort 00:03:39.116 LINK reserve 00:03:39.116 LINK spdk_nvme_identify 00:03:39.116 CXX test/cpp_headers/ftl.o 00:03:39.116 LINK boot_partition 00:03:39.116 CXX test/cpp_headers/gpt_spec.o 00:03:39.116 CXX test/cpp_headers/hexlify.o 00:03:39.116 LINK connect_stress 00:03:39.116 CXX test/cpp_headers/histogram_data.o 00:03:39.116 CXX test/cpp_headers/idxd.o 00:03:39.116 CXX test/cpp_headers/idxd_spec.o 00:03:39.116 CXX test/cpp_headers/init.o 00:03:39.116 CXX test/cpp_headers/ioat.o 00:03:39.116 CXX 
test/cpp_headers/ioat_spec.o 00:03:39.116 LINK fused_ordering 00:03:39.116 CXX test/cpp_headers/iscsi_spec.o 00:03:39.116 LINK doorbell_aers 00:03:39.116 CXX test/cpp_headers/json.o 00:03:39.116 LINK vhost_fuzz 00:03:39.116 LINK simple_copy 00:03:39.116 CXX test/cpp_headers/jsonrpc.o 00:03:39.116 CXX test/cpp_headers/keyring.o 00:03:39.116 CXX test/cpp_headers/keyring_module.o 00:03:39.116 CXX test/cpp_headers/likely.o 00:03:39.116 CXX test/cpp_headers/log.o 00:03:39.116 CXX test/cpp_headers/lvol.o 00:03:39.116 CXX test/cpp_headers/memory.o 00:03:39.383 CXX test/cpp_headers/mmio.o 00:03:39.383 CXX test/cpp_headers/nbd.o 00:03:39.383 CXX test/cpp_headers/notify.o 00:03:39.383 CXX test/cpp_headers/nvme.o 00:03:39.383 CXX test/cpp_headers/nvme_intel.o 00:03:39.383 CXX test/cpp_headers/nvme_ocssd.o 00:03:39.383 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:39.383 CXX test/cpp_headers/nvme_spec.o 00:03:39.383 CXX test/cpp_headers/nvme_zns.o 00:03:39.383 LINK spdk_bdev 00:03:39.383 CXX test/cpp_headers/nvmf_cmd.o 00:03:39.383 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:39.383 CXX test/cpp_headers/nvmf.o 00:03:39.383 LINK nvme_compliance 00:03:39.383 CXX test/cpp_headers/nvmf_spec.o 00:03:39.383 CXX test/cpp_headers/nvmf_transport.o 00:03:39.383 CXX test/cpp_headers/opal.o 00:03:39.383 CXX test/cpp_headers/opal_spec.o 00:03:39.383 LINK fdp 00:03:39.383 CXX test/cpp_headers/pci_ids.o 00:03:39.383 CXX test/cpp_headers/pipe.o 00:03:39.383 CXX test/cpp_headers/queue.o 00:03:39.383 LINK pci_ut 00:03:39.383 CXX test/cpp_headers/reduce.o 00:03:39.383 CXX test/cpp_headers/rpc.o 00:03:39.383 CXX test/cpp_headers/scheduler.o 00:03:39.641 CXX test/cpp_headers/scsi.o 00:03:39.641 CXX test/cpp_headers/scsi_spec.o 00:03:39.641 CXX test/cpp_headers/sock.o 00:03:39.641 CXX test/cpp_headers/stdinc.o 00:03:39.641 CXX test/cpp_headers/thread.o 00:03:39.641 CXX test/cpp_headers/string.o 00:03:39.641 CXX test/cpp_headers/trace.o 00:03:39.641 CXX test/cpp_headers/trace_parser.o 00:03:39.641 CXX 
test/cpp_headers/tree.o 00:03:39.641 CXX test/cpp_headers/ublk.o 00:03:39.641 CXX test/cpp_headers/util.o 00:03:39.641 CXX test/cpp_headers/uuid.o 00:03:39.641 CXX test/cpp_headers/version.o 00:03:39.641 CXX test/cpp_headers/vfio_user_pci.o 00:03:39.641 CXX test/cpp_headers/vfio_user_spec.o 00:03:39.641 CXX test/cpp_headers/vhost.o 00:03:39.641 CXX test/cpp_headers/vmd.o 00:03:39.641 CXX test/cpp_headers/xor.o 00:03:39.641 CXX test/cpp_headers/zipf.o 00:03:39.899 LINK memory_ut 00:03:40.465 LINK iscsi_fuzz 00:03:40.465 LINK cuse 00:03:43.745 LINK esnap 00:03:43.745 00:03:43.745 real 0m40.433s 00:03:43.745 user 7m35.918s 00:03:43.745 sys 1m50.588s 00:03:43.745 23:43:49 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:43.745 23:43:49 make -- common/autotest_common.sh@10 -- $ set +x 00:03:43.745 ************************************ 00:03:43.745 END TEST make 00:03:43.745 ************************************ 00:03:43.745 23:43:49 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:43.745 23:43:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:43.745 23:43:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:43.745 23:43:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.745 23:43:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:43.745 23:43:49 -- pm/common@44 -- $ pid=547940 00:03:43.745 23:43:49 -- pm/common@50 -- $ kill -TERM 547940 00:03:43.745 23:43:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.745 23:43:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:43.745 23:43:49 -- pm/common@44 -- $ pid=547942 00:03:43.745 23:43:49 -- pm/common@50 -- $ kill -TERM 547942 00:03:43.745 23:43:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.745 23:43:49 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:43.745 23:43:49 -- pm/common@44 -- $ pid=547944 00:03:43.745 23:43:49 -- pm/common@50 -- $ kill -TERM 547944 00:03:43.745 23:43:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.745 23:43:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:43.745 23:43:49 -- pm/common@44 -- $ pid=547972 00:03:43.745 23:43:49 -- pm/common@50 -- $ sudo -E kill -TERM 547972 00:03:44.004 23:43:49 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:44.004 23:43:49 -- nvmf/common.sh@7 -- # uname -s 00:03:44.004 23:43:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:44.004 23:43:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:44.004 23:43:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:44.004 23:43:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:44.004 23:43:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:44.004 23:43:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:44.004 23:43:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:44.004 23:43:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:44.004 23:43:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:44.004 23:43:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:44.004 23:43:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:03:44.004 23:43:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:03:44.004 23:43:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:44.004 23:43:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:44.004 23:43:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:44.004 23:43:49 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:44.004 23:43:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:44.004 23:43:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:44.004 23:43:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:44.004 23:43:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:44.004 23:43:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.004 23:43:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.004 23:43:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.004 23:43:49 -- paths/export.sh@5 -- # export PATH 00:03:44.004 23:43:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.004 23:43:49 -- nvmf/common.sh@47 -- # : 0 00:03:44.004 23:43:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:44.004 23:43:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:44.004 23:43:49 -- nvmf/common.sh@25 -- # '[' 
0 -eq 1 ']' 00:03:44.004 23:43:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:44.004 23:43:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:44.004 23:43:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:44.004 23:43:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:44.004 23:43:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:44.004 23:43:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:44.004 23:43:49 -- spdk/autotest.sh@32 -- # uname -s 00:03:44.004 23:43:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:44.004 23:43:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:44.004 23:43:49 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:44.004 23:43:49 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:44.004 23:43:49 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:44.004 23:43:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:44.004 23:43:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:44.004 23:43:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:44.004 23:43:49 -- spdk/autotest.sh@48 -- # udevadm_pid=623480 00:03:44.004 23:43:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:44.004 23:43:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:44.004 23:43:49 -- pm/common@17 -- # local monitor 00:03:44.004 23:43:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.004 23:43:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.004 23:43:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.004 23:43:49 -- pm/common@21 -- # date +%s 00:03:44.005 23:43:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.005 23:43:49 -- 
pm/common@21 -- # date +%s 00:03:44.005 23:43:49 -- pm/common@25 -- # sleep 1 00:03:44.005 23:43:49 -- pm/common@21 -- # date +%s 00:03:44.005 23:43:49 -- pm/common@21 -- # date +%s 00:03:44.005 23:43:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721857429 00:03:44.005 23:43:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721857429 00:03:44.005 23:43:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721857429 00:03:44.005 23:43:49 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721857429 00:03:44.005 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721857429_collect-vmstat.pm.log 00:03:44.005 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721857429_collect-cpu-load.pm.log 00:03:44.005 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721857429_collect-cpu-temp.pm.log 00:03:44.005 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721857429_collect-bmc-pm.bmc.pm.log 00:03:44.938 23:43:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:44.938 23:43:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:44.938 23:43:50 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:44.938 23:43:50 -- 
common/autotest_common.sh@10 -- # set +x 00:03:44.938 23:43:50 -- spdk/autotest.sh@59 -- # create_test_list 00:03:44.939 23:43:50 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:44.939 23:43:50 -- common/autotest_common.sh@10 -- # set +x 00:03:44.939 23:43:50 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:44.939 23:43:50 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:44.939 23:43:50 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:44.939 23:43:50 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:44.939 23:43:50 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:44.939 23:43:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:44.939 23:43:50 -- common/autotest_common.sh@1451 -- # uname 00:03:44.939 23:43:50 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:44.939 23:43:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:44.939 23:43:50 -- common/autotest_common.sh@1471 -- # uname 00:03:44.939 23:43:50 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:44.939 23:43:50 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:44.939 23:43:50 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:44.939 23:43:50 -- spdk/autotest.sh@72 -- # hash lcov 00:03:44.939 23:43:50 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:44.939 23:43:50 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:44.939 --rc lcov_branch_coverage=1 00:03:44.939 --rc lcov_function_coverage=1 00:03:44.939 --rc genhtml_branch_coverage=1 00:03:44.939 --rc genhtml_function_coverage=1 00:03:44.939 --rc genhtml_legend=1 00:03:44.939 --rc geninfo_all_blocks=1 00:03:44.939 ' 00:03:44.939 23:43:50 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:44.939 --rc lcov_branch_coverage=1 00:03:44.939 --rc 
lcov_function_coverage=1 00:03:44.939 --rc genhtml_branch_coverage=1 00:03:44.939 --rc genhtml_function_coverage=1 00:03:44.939 --rc genhtml_legend=1 00:03:44.939 --rc geninfo_all_blocks=1 00:03:44.939 ' 00:03:44.939 23:43:50 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:44.939 --rc lcov_branch_coverage=1 00:03:44.939 --rc lcov_function_coverage=1 00:03:44.939 --rc genhtml_branch_coverage=1 00:03:44.939 --rc genhtml_function_coverage=1 00:03:44.939 --rc genhtml_legend=1 00:03:44.939 --rc geninfo_all_blocks=1 00:03:44.939 --no-external' 00:03:44.939 23:43:50 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:44.939 --rc lcov_branch_coverage=1 00:03:44.939 --rc lcov_function_coverage=1 00:03:44.939 --rc genhtml_branch_coverage=1 00:03:44.939 --rc genhtml_function_coverage=1 00:03:44.939 --rc genhtml_legend=1 00:03:44.939 --rc geninfo_all_blocks=1 00:03:44.939 --no-external' 00:03:44.939 23:43:50 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:44.939 lcov: LCOV version 1.14 00:03:44.939 23:43:50 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:59.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:59.832 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:14.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:14.704 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:14.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:14.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:14.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:14.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:14.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:14.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:14.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:14.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:14.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:14.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:14.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:14.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:14.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:14.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions 
found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:14.705 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:14.705 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 
00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:14.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:14.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:14.706 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:14.706 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:14.706 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:14.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:14.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:17.986 23:44:23 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:17.986 23:44:23 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:17.986 23:44:23 -- common/autotest_common.sh@10 -- # set +x 00:04:17.986 23:44:23 -- spdk/autotest.sh@91 -- # rm -f 00:04:17.986 23:44:23 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.362 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:19.362 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:19.362 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:19.362 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:19.362 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:19.362 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:19.362 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:19.362 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:19.362 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:04:19.362 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:19.362 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:19.362 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 
00:04:19.362 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:19.362 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:19.362 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:19.362 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:19.362 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:19.620 23:44:25 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:19.620 23:44:25 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:19.620 23:44:25 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:19.620 23:44:25 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:19.620 23:44:25 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:19.620 23:44:25 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:19.620 23:44:25 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:19.620 23:44:25 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.620 23:44:25 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:19.620 23:44:25 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:19.620 23:44:25 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.620 23:44:25 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:19.620 23:44:25 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:19.620 23:44:25 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:19.620 23:44:25 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:19.620 No valid GPT data, bailing 00:04:19.620 23:44:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:19.620 23:44:25 -- scripts/common.sh@391 -- # pt= 00:04:19.620 23:44:25 -- scripts/common.sh@392 -- # return 1 00:04:19.621 23:44:25 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:19.621 1+0 records in 00:04:19.621 1+0 records out 
00:04:19.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00192609 s, 544 MB/s 00:04:19.621 23:44:25 -- spdk/autotest.sh@118 -- # sync 00:04:19.621 23:44:25 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:19.621 23:44:25 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:19.621 23:44:25 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:21.521 23:44:27 -- spdk/autotest.sh@124 -- # uname -s 00:04:21.521 23:44:27 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:21.521 23:44:27 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:21.521 23:44:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:21.521 23:44:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:21.521 23:44:27 -- common/autotest_common.sh@10 -- # set +x 00:04:21.521 ************************************ 00:04:21.521 START TEST setup.sh 00:04:21.521 ************************************ 00:04:21.521 23:44:27 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:21.521 * Looking for test storage... 
00:04:21.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:21.521 23:44:27 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:21.521 23:44:27 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:21.521 23:44:27 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:21.521 23:44:27 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:21.521 23:44:27 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:21.521 23:44:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:21.783 ************************************ 00:04:21.783 START TEST acl 00:04:21.783 ************************************ 00:04:21.783 23:44:27 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:21.783 * Looking for test storage... 00:04:21.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:21.783 23:44:27 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:21.783 23:44:27 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:21.783 23:44:27 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:21.783 23:44:27 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:21.783 23:44:27 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:21.783 23:44:27 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:21.783 23:44:27 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:21.783 23:44:27 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:21.783 23:44:27 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:21.783 23:44:27 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:21.783 23:44:27 
setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:21.783 23:44:27 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:21.783 23:44:27 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:21.783 23:44:27 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:21.783 23:44:27 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.783 23:44:27 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.158 23:44:28 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:23.158 23:44:28 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:23.158 23:44:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:23.158 23:44:28 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:23.158 23:44:28 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.158 23:44:28 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:24.533 Hugepages 00:04:24.533 node hugesize free / total 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.533 00:04:24.533 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:24.533 
23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.533 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- 
# continue 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 
00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:24.534 23:44:29 setup.sh.acl -- setup/acl.sh@54 -- # 
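The long `read -r _ dev _ _ _ driver _` loop above consumes the `setup.sh status` table one row at a time, discarding all but the BDF and driver columns, skipping `ioatdma` rows, and collecting the single NVMe controller (`0000:0b:00.0`) that is not listed in `PCI_BLOCKED`. A sketch under the assumption that the BDF and driver sit in the second and sixth whitespace-separated columns, as the trace's `read` pattern implies:

```shell
#!/usr/bin/env bash
# Sketch of the collect_setup_devs loop traced above; reads status rows
# from stdin. Column positions are inferred from the read pattern.
collect_setup_devs() {
    local dev driver _
    declare -ga devs=()
    declare -gA drivers=()
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue            # skip non-BDF header rows
        [[ $driver == nvme ]] || continue            # ioatdma etc. are ignored
        [[ ${PCI_BLOCKED:-} == *"$dev"* ]] && continue  # honor the block list
        devs+=("$dev")
        drivers["$dev"]=$driver
    done
}
```

This is why `(( 1 > 0 ))` holds at `acl.sh@24`: exactly one NVMe device survives the filter and the `denied`/`allowed` sub-tests then run against it.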
run_test denied denied 00:04:24.534 23:44:29 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.534 23:44:29 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.534 23:44:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:24.534 ************************************ 00:04:24.534 START TEST denied 00:04:24.534 ************************************ 00:04:24.534 23:44:30 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:24.534 23:44:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:04:24.534 23:44:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:24.534 23:44:30 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:04:24.534 23:44:30 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.534 23:44:30 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:25.908 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:04:25.908 23:44:31 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:04:25.908 23:44:31 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:25.908 23:44:31 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:25.908 23:44:31 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:04:25.908 23:44:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:04:25.908 23:44:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:25.908 23:44:31 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:25.908 23:44:31 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:25.908 23:44:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.908 23:44:31 setup.sh.acl.denied -- 
setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:28.438 00:04:28.438 real 0m3.798s 00:04:28.438 user 0m1.116s 00:04:28.438 sys 0m1.781s 00:04:28.438 23:44:33 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:28.438 23:44:33 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:28.438 ************************************ 00:04:28.438 END TEST denied 00:04:28.438 ************************************ 00:04:28.438 23:44:33 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:28.438 23:44:33 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:28.438 23:44:33 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:28.438 23:44:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:28.438 ************************************ 00:04:28.438 START TEST allowed 00:04:28.438 ************************************ 00:04:28.438 23:44:33 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:28.438 23:44:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:04:28.438 23:44:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:28.438 23:44:33 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:04:28.438 23:44:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.438 23:44:33 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:30.394 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:30.394 23:44:36 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:30.394 23:44:36 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:30.394 23:44:36 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:30.394 23:44:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.394 23:44:36 
setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:32.299 00:04:32.299 real 0m3.810s 00:04:32.299 user 0m0.986s 00:04:32.299 sys 0m1.719s 00:04:32.299 23:44:37 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:32.299 23:44:37 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:32.299 ************************************ 00:04:32.299 END TEST allowed 00:04:32.299 ************************************ 00:04:32.299 00:04:32.299 real 0m10.434s 00:04:32.299 user 0m3.223s 00:04:32.299 sys 0m5.277s 00:04:32.299 23:44:37 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:32.299 23:44:37 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:32.299 ************************************ 00:04:32.299 END TEST acl 00:04:32.299 ************************************ 00:04:32.299 23:44:37 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:32.299 23:44:37 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:32.299 23:44:37 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:32.299 23:44:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:32.299 ************************************ 00:04:32.299 START TEST hugepages 00:04:32.299 ************************************ 00:04:32.299 23:44:37 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:32.299 * Looking for test storage... 
00:04:32.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 37066676 kB' 'MemAvailable: 41001688 kB' 'Buffers: 2704 kB' 'Cached: 16591124 kB' 'SwapCached: 0 kB' 'Active: 13438908 kB' 'Inactive: 3693412 kB' 'Active(anon): 12999136 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541820 kB' 'Mapped: 177412 kB' 'Shmem: 12460644 kB' 'KReclaimable: 446264 kB' 'Slab: 842264 kB' 'SReclaimable: 446264 kB' 'SUnreclaim: 396000 kB' 'KernelStack: 12880 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 14137376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196984 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.299 23:44:37 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.299 
23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.299 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.300 23:44:37 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.300 
23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.300 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[... 00:04:32.300-00:04:32.301 23:44:37 setup.sh.hugepages -- setup/common.sh@31-32: get_meminfo skips the remaining /proc/meminfo fields (WritebackTmp through HugePages_Surp) while scanning for Hugepagesize ...]
00:04:32.301 23:44:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:32.301 
23:44:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:32.301 23:44:37 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:32.301 23:44:37 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:32.301 23:44:37 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:32.301 23:44:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:32.301 ************************************ 00:04:32.301 START TEST default_setup 00:04:32.301 ************************************ 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:32.301 23:44:37 
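The trace above enters `get_test_nr_hugepages 2097152 0` with `default_hugepages=2048`. A minimal standalone sketch of that size-to-page-count arithmetic, under the assumption that both values are in kB (2097152 kB at 2048 kB per page gives 1024 pages, matching the trace); this is a simplified illustration, not the script's full per-node allocation logic:

```shell
#!/usr/bin/env bash
# Sketch of the size -> page-count step seen in get_test_nr_hugepages.
# Assumption: both quantities are in kB (2097152 / 2048 = 1024).
default_hugepages=2048            # Hugepagesize reported by /proc/meminfo, kB

get_test_nr_hugepages() {
  local size=$1                   # requested hugepage pool size, kB
  echo $(( size / default_hugepages ))
}

get_test_nr_hugepages 2097152     # prints 1024
```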
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.301 23:44:37 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:33.678 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:33.678 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:33.678 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:33.678 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:33.678 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:33.678 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:33.678 0000:00:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:04:33.678 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:33.678 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:33.678 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:33.678 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:33.678 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:33.678 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:33.678 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:33.678 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:33.678 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:34.621 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39149652 kB' 'MemAvailable: 43084696 kB' 'Buffers: 2704 kB' 'Cached: 16591220 kB' 'SwapCached: 0 kB' 'Active: 13457400 kB' 'Inactive: 3693412 kB' 'Active(anon): 13017628 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560696 kB' 'Mapped: 177564 kB' 'Shmem: 12460740 kB' 'KReclaimable: 446296 kB' 'Slab: 842108 kB' 'SReclaimable: 446296 kB' 'SUnreclaim: 395812 kB' 'KernelStack: 12864 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14158388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197032 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 
19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.621 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... 00:04:34.621-00:04:34.622 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31-32: get_meminfo skips the remaining /proc/meminfo fields (MemFree through HardwareCorrupted) while scanning for AnonHugePages ...]
00:04:34.622 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.622 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:34.622 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:34.622 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:34.622 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.622 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.622 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:34.622 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:34.623 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:34.623 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.623 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.623 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.623 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.623 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.623 23:44:40 
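The loop traced here is setup/common.sh's `get_meminfo` pattern: split each "Field: value unit" record on `IFS=': '`, `continue` past non-matching fields, and `echo` the value once the requested field is found. A standalone sketch of the same parser, reading sample lines from stdin instead of /proc/meminfo so it can run anywhere:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern: scan "Field: value unit" records and
# print the value of the requested field (stdin stands in for /proc/meminfo).
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
    continue                       # non-matching field: keep scanning
  done
  return 1
}

printf '%s\n' 'HugePages_Total: 1024' 'Hugepagesize: 2048 kB' |
  get_meminfo Hugepagesize         # prints 2048
```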
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.623 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.623 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39150052 kB' 'MemAvailable: 43085096 kB' 'Buffers: 2704 kB' 'Cached: 16591224 kB' 'SwapCached: 0 kB' 'Active: 13457932 kB' 'Inactive: 3693412 kB' 'Active(anon): 13018160 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561252 kB' 'Mapped: 177564 kB' 'Shmem: 12460744 kB' 'KReclaimable: 446296 kB' 'Slab: 842084 kB' 'SReclaimable: 446296 kB' 'SUnreclaim: 395788 kB' 'KernelStack: 12864 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14158408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:34.623 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.623 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.623 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.623 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.623 23:44:40 
[xtrace scan of the remaining /proc/meminfo fields against HugePages_Surp elided: every non-matching field hits 'continue']
00:04:34.624 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:34.624 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:34.624 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:34.624 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:34.625 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:34.625 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:34.625 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:34.625 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:34.625 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:34.625 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.625 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.625 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.625 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.625 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.625 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:34.625 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:34.625 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39150344 kB' 'MemAvailable: 43085388 kB' 'Buffers: 2704 kB' 'Cached: 16591224 kB' 'SwapCached: 0 kB' 'Active: 13457020 kB' 'Inactive: 3693412 kB' 'Active(anon): 13017248 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560240 kB' 'Mapped: 177444 kB' 'Shmem: 12460744 kB' 'KReclaimable: 446296 kB' 'Slab: 842092 kB' 'SReclaimable: 446296 kB' 'SUnreclaim: 395796 kB' 'KernelStack: 12816 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14158428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB'
00:04:34.625 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:34.625 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
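The trace above is `setup/common.sh`'s `get_meminfo` walking `/proc/meminfo` one field at a time until the requested key matches. A minimal self-contained sketch of that pattern follows; it is a simplified illustration (no per-NUMA-node `/sys/devices/system/node/node<N>/meminfo` handling, no xtrace), not the actual SPDK source:

```shell
# Sketch of the scan seen in the trace: split each /proc/meminfo line with
# IFS=': ' so "HugePages_Surp:   0" becomes var=HugePages_Surp, val=0, and
# print the value of the first field whose name matches the requested key.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"   # hugepage counters carry no "kB" unit; val is the bare number
            return 0
        fi
    done < /proc/meminfo
    return 1   # field not present in this kernel's /proc/meminfo
}

get_meminfo HugePages_Surp   # prints the current surplus hugepage count
```

On the node in this log, both `HugePages_Surp` and `HugePages_Rsvd` read back 0, matching the `surp=0` assignment in the trace.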
[xtrace scan of the remaining /proc/meminfo fields against HugePages_Rsvd elided: each non-matching field hits 'continue'; the trace is truncated mid-scan]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.626 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.627 23:44:40 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:34.627 nr_hugepages=1024 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:34.627 resv_hugepages=0 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:34.627 surplus_hugepages=0 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:34.627 anon_hugepages=0 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- 
setup/common.sh@18 -- # local node= 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39150344 kB' 'MemAvailable: 43085388 kB' 'Buffers: 2704 kB' 'Cached: 16591264 kB' 'SwapCached: 0 kB' 'Active: 13457336 kB' 'Inactive: 3693412 kB' 'Active(anon): 13017564 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560556 kB' 'Mapped: 177444 kB' 'Shmem: 12460784 kB' 'KReclaimable: 446296 kB' 'Slab: 842092 kB' 'SReclaimable: 446296 kB' 'SUnreclaim: 395796 kB' 'KernelStack: 12832 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14158452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.627 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- #
get_meminfo HugePages_Surp 0 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 16464744 kB' 'MemUsed: 16365140 kB' 'SwapCached: 0 kB' 'Active: 9784760 kB' 'Inactive: 3337740 kB' 'Active(anon): 9429004 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12837944 kB' 'Mapped: 112928 kB' 'AnonPages: 287884 kB' 'Shmem: 9144448 kB' 'KernelStack: 7944 kB' 'PageTables: 5464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164736 kB' 'Slab: 355444 kB' 'SReclaimable: 164736 kB' 'SUnreclaim: 190708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.629 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
00:04:34.630 node0=1024 expecting 1024 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:34.630 00:04:34.630 real 0m2.442s 00:04:34.630 user 0m0.653s 00:04:34.630 sys 0m0.864s 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:34.630 23:44:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:34.630 ************************************ 00:04:34.630 END TEST default_setup 00:04:34.630 ************************************ 00:04:34.630 23:44:40 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:34.630 23:44:40 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:34.630 23:44:40 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.630 23:44:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:34.889 ************************************ 00:04:34.889 START TEST per_node_1G_alloc 00:04:34.889 ************************************ 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:34.889 23:44:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:34.889 23:44:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.889 23:44:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:35.824 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:35.824 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:35.824 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:35.824 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:35.824 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:35.824 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:35.824 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:35.824 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:35.824 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:35.824 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:35.824 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:35.824 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:35.824 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:35.824 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:35.824 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:35.824 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:35.824 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local 
sorted_s 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39133196 kB' 'MemAvailable: 43068240 kB' 'Buffers: 2704 kB' 'Cached: 16591332 kB' 'SwapCached: 0 kB' 'Active: 13458876 kB' 'Inactive: 
3693412 kB' 'Active(anon): 13019104 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561580 kB' 'Mapped: 177512 kB' 'Shmem: 12460852 kB' 'KReclaimable: 446296 kB' 'Slab: 842132 kB' 'SReclaimable: 446296 kB' 'SUnreclaim: 395836 kB' 'KernelStack: 12800 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14158500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197112 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.087 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.087 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.088 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.089 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39137300 kB' 'MemAvailable: 43072344 kB' 'Buffers: 2704 kB' 'Cached: 16591348 kB' 'SwapCached: 0 kB' 'Active: 13457916 kB' 'Inactive: 3693412 kB' 'Active(anon): 13018144 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560612 kB' 'Mapped: 177452 kB' 'Shmem: 12460868 kB' 'KReclaimable: 446296 kB' 'Slab: 842128 kB' 'SReclaimable: 446296 kB' 'SUnreclaim: 395832 kB' 'KernelStack: 12832 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14158520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197096 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.089 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.090 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.090 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical per-field trace (IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue) elided for the remaining /proc/meminfo fields ...]
00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39137620 kB' 'MemAvailable: 43072664 kB' 'Buffers: 2704 kB' 'Cached: 16591352 kB' 'SwapCached: 0 kB' 'Active: 13457904 kB' 'Inactive: 3693412 kB' 'Active(anon): 13018132 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560640 kB' 'Mapped: 177452 kB' 'Shmem: 12460872 kB' 'KReclaimable: 446296 kB' 'Slab: 842128 kB' 'SReclaimable: 446296 kB' 'SUnreclaim: 395832 kB' 'KernelStack: 12848 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14158544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197096 kB' 'VmallocChunk: 
0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.091 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical per-field trace (IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue) elided for the remaining /proc/meminfo fields ...]
00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:36.094 nr_hugepages=1024 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:36.094 resv_hugepages=0 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:36.094 surplus_hugepages=0 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:36.094 anon_hugepages=0 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39138504 kB' 'MemAvailable: 43073548 kB' 'Buffers: 2704 kB' 'Cached: 16591376 kB' 'SwapCached: 0 kB' 'Active: 13457968 kB' 'Inactive: 3693412 kB' 'Active(anon): 13018196 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560640 kB' 'Mapped: 177452 kB' 'Shmem: 12460896 kB' 'KReclaimable: 446296 kB' 'Slab: 842128 kB' 'SReclaimable: 446296 kB' 'SUnreclaim: 395832 kB' 'KernelStack: 12848 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14158564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197112 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical per-field trace (IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue) elided for the intervening /proc/meminfo fields ...]
00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.094 23:44:41
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.094 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.095 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.096 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:36.096 
23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 17495760 kB' 'MemUsed: 15334124 kB' 'SwapCached: 0 kB' 'Active: 9784884 kB' 'Inactive: 3337740 kB' 'Active(anon): 9429128 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337740 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12837948 kB' 'Mapped: 112936 kB' 'AnonPages: 287848 kB' 'Shmem: 9144452 kB' 'KernelStack: 7976 kB' 'PageTables: 5412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164736 kB' 'Slab: 355544 kB' 'SReclaimable: 164736 kB' 'SUnreclaim: 190808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.096 
23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.096 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:36.097 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.357 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 21642932 kB' 'MemUsed: 6068892 kB' 'SwapCached: 0 kB' 'Active: 3673100 kB' 'Inactive: 355672 kB' 'Active(anon): 3589084 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 355672 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3756176 kB' 'Mapped: 64516 kB' 'AnonPages: 272788 kB' 'Shmem: 3316488 kB' 'KernelStack: 4872 kB' 'PageTables: 2688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281560 kB' 'Slab: 486584 kB' 'SReclaimable: 281560 kB' 'SUnreclaim: 205024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.357 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.357 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 
23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.358 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:36.359 node0=512 expecting 512 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 
expecting 512' 00:04:36.359 node1=512 expecting 512 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:36.359 00:04:36.359 real 0m1.479s 00:04:36.359 user 0m0.600s 00:04:36.359 sys 0m0.842s 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:36.359 23:44:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:36.359 ************************************ 00:04:36.359 END TEST per_node_1G_alloc 00:04:36.359 ************************************ 00:04:36.359 23:44:41 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:36.359 23:44:41 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:36.359 23:44:41 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:36.359 23:44:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:36.359 ************************************ 00:04:36.359 START TEST even_2G_alloc 00:04:36.359 ************************************ 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 
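The xtrace above repeatedly exercises the `get_meminfo` helper from `setup/common.sh`: it selects `/proc/meminfo` or a per-NUMA-node meminfo file, strips the `Node <N> ` prefix, then scans `Key: value kB` lines with `IFS=': ' read -r var val _` until the requested key matches. A minimal standalone sketch of that pattern, reconstructed from the trace (the function body here is an assumption based on the xtrace output, not the canonical SPDK source):

```shell
#!/usr/bin/env bash
# Needed so the +([0-9]) pattern in the prefix-strip below works at runtime.
shopt -s extglob

# Sketch of the get_meminfo pattern visible in the trace (assumed, not verbatim SPDK).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local var val _rest line
    # Per-node queries read that NUMA node's meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local mem
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan "Key: value kB" lines for the requested key, as the trace does.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _rest <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

echo "MemTotal: $(get_meminfo MemTotal) kB"
```

Per-node lookups work the same way, e.g. `get_meminfo HugePages_Surp 1` reads `node1/meminfo` when that sysfs file is present, which is exactly the branch the trace takes before the long key-matching loop.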
00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # 
[[ output == output ]] 00:04:36.359 23:44:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:37.291 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:37.291 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:37.291 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:37.291 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:37.291 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:37.291 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:37.291 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:37.291 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:37.291 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:37.291 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:37.291 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:37.291 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:37.291 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:37.291 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:37.291 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:37.291 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:37.291 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # 
local resv 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39138972 kB' 'MemAvailable: 43074008 kB' 'Buffers: 2704 kB' 'Cached: 16591476 kB' 'SwapCached: 0 kB' 'Active: 13458416 kB' 'Inactive: 3693412 kB' 'Active(anon): 13018644 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 
0 kB' 'AnonPages: 560924 kB' 'Mapped: 177500 kB' 'Shmem: 12460996 kB' 'KReclaimable: 446288 kB' 'Slab: 842016 kB' 'SReclaimable: 446288 kB' 'SUnreclaim: 395728 kB' 'KernelStack: 12816 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14158900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197224 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.554 23:44:43 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.554 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:37.555 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.556 23:44:43 
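The trace above shows `setup/common.sh`'s `get_meminfo` walking /proc/meminfo line by line with `IFS=': ' read -r var val _`, skipping every key with `continue` until the requested one (here `AnonHugePages`) matches, then echoing its value and returning. A minimal standalone sketch of that lookup pattern (the function name and sample file below are illustrative, not the actual setup/common.sh source):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-lookup pattern visible in the trace:
# split each "Key: value kB" line on ': ', compare the key,
# and print the numeric value on the first match.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Mirrors the repeated "[[ Key == ... ]] / continue" lines above
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Demo against a small meminfo-style snippet instead of the live /proc/meminfo
printf '%s\n' 'MemTotal: 60541708 kB' 'AnonHugePages: 0 kB' \
    'HugePages_Surp: 0' > /tmp/meminfo.sample
get_meminfo_sketch AnonHugePages /tmp/meminfo.sample   # prints 0
get_meminfo_sketch HugePages_Surp /tmp/meminfo.sample  # prints 0
```

The real helper additionally slurps the file via `mapfile` and strips a leading `Node N ` prefix so the same loop works on per-node meminfo files under /sys/devices/system/node/.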
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39139172 kB' 'MemAvailable: 43074208 kB' 'Buffers: 2704 kB' 'Cached: 16591476 kB' 'SwapCached: 0 kB' 'Active: 13458084 kB' 'Inactive: 3693412 kB' 'Active(anon): 13018312 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560620 kB' 'Mapped: 177464 kB' 'Shmem: 12460996 kB' 'KReclaimable: 446288 kB' 'Slab: 842040 kB' 'SReclaimable: 446288 kB' 'SUnreclaim: 395752 kB' 'KernelStack: 12848 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14158920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197208 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 
23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.556 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... the setup/common.sh@31-32 skip loop repeats in this same four-line pattern for each remaining /proc/meminfo key (SecPageTables, NFS_Unstable, ..., HugePages_Rsvd) until the requested key matches ...]
00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:37.557 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:37.558 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100
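The repeated `[[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _` records above are xtrace output of a per-key lookup over `/proc/meminfo`. The following is a minimal sketch of such a lookup, reconstructed from the trace; `get_meminfo`, the `IFS=': '` splitting, and the `var`/`val` names come from the trace, but the real `setup/common.sh` implementation (NUMA-node handling, `mapfile` caching) differs in details, and the demo file path is made up here.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the per-key /proc/meminfo lookup seen in the trace.
get_meminfo() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Skip keys until the requested one matches; each skipped key is one
        # "continue" record in the xtrace output above.
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < "$file"
    echo 0  # key absent: report 0, matching the trace's "echo 0" fallback
}

# Demo against a small meminfo-style file so the output is deterministic
printf '%s\n' 'MemTotal: 60541708 kB' 'HugePages_Total: 1024' \
    'HugePages_Surp: 0' > /tmp/meminfo.demo
get_meminfo HugePages_Total /tmp/meminfo.demo   # prints 1024
```

Splitting on `IFS=': '` makes `read` drop both the colon and the padding spaces, so `val` receives the bare number and the trailing `kB` unit lands in the throwaway `_` field.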
-- # get_meminfo HugePages_Rsvd
00:04:37.558 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:37.558 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:37.558 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:37.558 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:37.558 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:37.558 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:37.558 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:37.558 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:37.558 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:37.558 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:37.558 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:37.558 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39138416 kB' 'MemAvailable: 43073452 kB' 'Buffers: 2704 kB' 'Cached: 16591496 kB' 'SwapCached: 0 kB' 'Active: 13458584 kB' 'Inactive: 3693412 kB' 'Active(anon): 13018812 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561172 kB' 'Mapped: 177900 kB' 'Shmem: 12461016 kB' 'KReclaimable: 446288 kB' 'Slab: 842040 kB' 'SReclaimable: 446288 kB' 'SUnreclaim: 395752 kB' 'KernelStack: 12896 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14161364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197208 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB'
00:04:37.558 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:37.558 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the setup/common.sh@31-32 skip loop repeats for each remaining /proc/meminfo key until HugePages_Rsvd matches ...]
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:37.560 nr_hugepages=1024
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:37.560 resv_hugepages=0
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc --
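After the `surp=0` and `resv=0` lookups, the trace shows `setup/hugepages.sh@107` and `@109` evaluating `(( 1024 == nr_hugepages + surp + resv ))` and `(( 1024 == nr_hugepages ))`. A minimal sketch of that accounting check, with the values hard-coded from this run's trace (in the real script they come from `get_meminfo`, and the success message here is invented for the demo):

```shell
#!/usr/bin/env bash
# Sketch of the hugepage accounting check at setup/hugepages.sh@107-109.
nr_hugepages=1024   # requested 2M pages for the even_2G_alloc test
surp=0              # HugePages_Surp, from the first get_meminfo scan
resv=0              # HugePages_Rsvd, from the second get_meminfo scan
total=1024          # HugePages_Total reported by the kernel

# The kernel-reported total must equal requested pages plus surplus plus
# reserved, and with no surplus/reserved it must equal the request exactly.
if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
    echo "hugepage accounting consistent"
fi
```

Both arithmetic tests passing is what lets the trace proceed to the follow-up `get_meminfo HugePages_Total` verification below.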
setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:37.560 surplus_hugepages=0
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:37.560 anon_hugepages=0
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39136740 kB' 'MemAvailable: 43071776 kB' 'Buffers: 2704 kB' 'Cached: 16591520 kB' 'SwapCached: 0 kB' 'Active: 13458108 kB' 'Inactive: 3693412 kB' 'Active(anon): 13018336 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560668 kB' 'Mapped: 177464 kB' 'Shmem: 12461040 kB' 'KReclaimable: 446288 kB' 'Slab: 842032 kB' 'SReclaimable: 446288 kB' 'SUnreclaim: 395744 kB' 'KernelStack: 12800 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14158964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197144 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB'
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the setup/common.sh@31-32 skip loop repeats for the remaining /proc/meminfo keys ...]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.560 23:44:43 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.560 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.561 23:44:43 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.561 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:37.562 23:44:43 
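The xtrace above shows the test harness's get_meminfo helper splitting each "Field: value [kB]" line of /proc/meminfo (or the per-node copy under /sys/devices/system/node/nodeN/meminfo, with its "Node N " prefix stripped) on IFS=': ', skipping every field until the requested one matches, then echoing its value. A minimal standalone sketch of that parsing pattern — a hypothetical reimplementation for illustration, not the SPDK setup/common.sh source, and taking the file path as an explicit argument:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-scanning pattern traced in the log: read each
# "Field: value [kB]" line, split on ': ', and print the value of the
# requested field. The sed step strips the "Node N " prefix that the
# per-node files under /sys/devices/system/node/nodeN/meminfo carry.
get_meminfo() {
    local get=$1 mem_f=$2
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"      # matched field: print its value and stop
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1                 # field not present in this meminfo file
}

# e.g. get_meminfo HugePages_Total /proc/meminfo
```

On the machine in this log, the system-wide lookup is what produces the "echo 1024" hit in the trace, and the node0 lookup below scans the sysfs copy the same way.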
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 17499388 kB' 'MemUsed: 15330496 kB' 'SwapCached: 0 kB' 'Active: 9785656 kB' 'Inactive: 3337740 kB' 'Active(anon): 9429900 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12838032 kB' 'Mapped: 112948 kB' 'AnonPages: 288552 kB' 'Shmem: 9144536 kB' 'KernelStack: 8008 kB' 'PageTables: 5508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164736 kB' 'Slab: 355460 kB' 'SReclaimable: 164736 kB' 'SUnreclaim: 190724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.562 
23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.562 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... repeated xtrace elided: the same IFS=': ' / read / [[ field == HugePages_Surp ]] / continue skip loop repeats for every remaining node0 meminfo field (Active through HugePages_Free) until the match below ...]
00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # IFS=': ' 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.563 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 21637936 kB' 'MemUsed: 6073888 kB' 'SwapCached: 0 kB' 'Active: 3672372 kB' 'Inactive: 355672 kB' 'Active(anon): 3588356 kB' 
'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 355672 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3756212 kB' 'Mapped: 64516 kB' 'AnonPages: 272032 kB' 'Shmem: 3316524 kB' 'KernelStack: 4824 kB' 'PageTables: 2548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281552 kB' 'Slab: 486564 kB' 'SReclaimable: 281552 kB' 'SUnreclaim: 205012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.564 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.565 23:44:43 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.565 23:44:43 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:37.565 node0=512 expecting 512 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:37.565 node1=512 expecting 512 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:37.565 00:04:37.565 real 0m1.371s 00:04:37.565 user 0m0.562s 00:04:37.565 sys 0m0.768s 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:37.565 23:44:43 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:37.565 ************************************ 00:04:37.565 END TEST even_2G_alloc 00:04:37.565 ************************************ 00:04:37.565 23:44:43 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:37.565 23:44:43 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:37.565 23:44:43 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.565 23:44:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:37.822 
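The xtrace above repeatedly exercises the `get_meminfo` helper from `setup/common.sh`, which scans `/proc/meminfo` (or a per-node `/sys/devices/system/node/nodeN/meminfo` file, as seen at `common.sh@23-24`) field by field with `IFS=': ' read -r var val _` until the requested key matches. A minimal standalone sketch of that loop follows; the function body mirrors the trace, but the sample input file and its values are illustrative, not taken from this run:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop traced above: read "key: value" pairs
# and print the value for the requested key. The sample file below is
# fabricated for demonstration; the real helper reads /proc/meminfo or
# /sys/devices/system/node/nodeN/meminfo.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        # Compare the current field name against the requested one,
        # mirroring the repeated [[ $var == $get ]] checks in the trace.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Usage with a fabricated meminfo snippet:
sample=$(mktemp)
printf '%s\n' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' > "$sample"
get_meminfo HugePages_Surp "$sample"   # prints 0
rm -f "$sample"
```

The real helper iterates every line even after a match is possible (hence the long runs of `continue` in the trace); short-circuiting with `return` as above is equivalent for a single lookup.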
************************************ 00:04:37.822 START TEST odd_alloc 00:04:37.822 ************************************ 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@84 -- # : 1 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.822 23:44:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:38.754 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:38.754 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:38.754 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:38.754 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:38.754 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:38.754 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:38.754 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:38.754 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:38.754 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:38.754 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:38.754 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:38.754 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:38.754 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 
00:04:38.754 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:38.754 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:38.754 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:38.754 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:39.017 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.018 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39139772 kB' 'MemAvailable: 43074792 kB' 'Buffers: 2704 kB' 'Cached: 16591604 kB' 'SwapCached: 0 kB' 'Active: 13454920 kB' 'Inactive: 3693412 kB' 'Active(anon): 13015148 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557224 kB' 'Mapped: 177032 kB' 'Shmem: 12461124 kB' 'KReclaimable: 446272 kB' 'Slab: 841756 kB' 'SReclaimable: 446272 kB' 'SUnreclaim: 395484 kB' 'KernelStack: 12736 kB' 'PageTables: 7620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 14146016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 
23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.018 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.019 
23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39141564 kB' 'MemAvailable: 43076584 kB' 'Buffers: 2704 kB' 'Cached: 16591604 kB' 'SwapCached: 0 kB' 'Active: 13458732 kB' 'Inactive: 3693412 kB' 'Active(anon): 13018960 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561076 kB' 'Mapped: 177468 kB' 'Shmem: 12461124 kB' 'KReclaimable: 446272 kB' 'Slab: 841780 kB' 'SReclaimable: 446272 kB' 'SUnreclaim: 395508 kB' 'KernelStack: 13136 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 14149368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197320 kB' 
'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.019 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 
23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 
23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.020 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.021 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39136684 kB' 'MemAvailable: 43071704 kB' 'Buffers: 2704 kB' 'Cached: 16591624 kB' 'SwapCached: 0 kB' 'Active: 13461312 kB' 'Inactive: 3693412 kB' 'Active(anon): 13021540 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564028 kB' 'Mapped: 176872 kB' 'Shmem: 12461144 kB' 'KReclaimable: 446272 kB' 'Slab: 841796 kB' 'SReclaimable: 446272 kB' 'SUnreclaim: 395524 kB' 'KernelStack: 13056 kB' 'PageTables: 9216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 14152172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197212 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 
23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.021 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.022 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.022 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:39.023 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:39.023 nr_hugepages=1025 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:39.023 resv_hugepages=0 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:39.023 surplus_hugepages=0 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:39.023 anon_hugepages=0 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39138572 kB' 'MemAvailable: 43073592 kB' 'Buffers: 2704 kB' 'Cached: 16591644 kB' 'SwapCached: 0 kB' 'Active: 13455868 kB' 'Inactive: 3693412 kB' 'Active(anon): 13016096 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558064 kB' 'Mapped: 176836 kB' 'Shmem: 12461164 kB' 'KReclaimable: 446272 kB' 'Slab: 841796 kB' 'SReclaimable: 446272 kB' 'SUnreclaim: 395524 kB' 'KernelStack: 13184 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 14146072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197336 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.023 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.023 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 
23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.024 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:39.025 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 17498516 kB' 'MemUsed: 15331368 kB' 'SwapCached: 0 kB' 'Active: 9781436 kB' 'Inactive: 3337740 kB' 'Active(anon): 9425680 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12838160 kB' 'Mapped: 112380 kB' 'AnonPages: 284128 kB' 'Shmem: 9144664 kB' 'KernelStack: 7928 kB' 'PageTables: 5080 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164728 kB' 'Slab: 355284 kB' 'SReclaimable: 164728 kB' 'SUnreclaim: 190556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.025 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 21641672 kB' 'MemUsed: 6070152 kB' 'SwapCached: 0 kB' 'Active: 3672940 kB' 'Inactive: 355672 kB' 'Active(anon): 3588924 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 355672 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3756228 kB' 'Mapped: 64056 kB' 'AnonPages: 272508 kB' 'Shmem: 3316540 kB' 'KernelStack: 4824 kB' 'PageTables: 2608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281544 kB' 'Slab: 486512 kB' 'SReclaimable: 281544 kB' 'SUnreclaim: 204968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.026 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.027 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.028 23:44:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.028 
23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.028 23:44:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:39.286 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:39.286 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.287 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.287 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.287 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:39.287 node0=512 expecting 513 00:04:39.287 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.287 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.287 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.287 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:39.287 node1=513 expecting 512 00:04:39.287 23:44:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:39.287 00:04:39.287 real 0m1.441s 00:04:39.287 user 0m0.623s 00:04:39.287 sys 0m0.780s 00:04:39.287 23:44:44 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.287 23:44:44 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:39.287 ************************************ 00:04:39.287 END TEST odd_alloc 00:04:39.287 ************************************ 00:04:39.287 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:39.287 23:44:44 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.287 
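The long runs of `[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue` lines above are the xtrace of `get_meminfo` in setup/common.sh scanning every `Key: value` pair of `/proc/meminfo` (or `/sys/devices/system/node/node<N>/meminfo`) until it reaches the requested field. A minimal sketch of that pattern, as a hypothetical standalone function rather than the SPDK helper itself:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the parsing loop traced above: split each line on
# ':' and ' ' via IFS, skip (continue) until the variable name matches the
# requested key, then emit its value. The real helper also strips a leading
# "Node <N> " prefix when reading a per-node meminfo file.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching field produces one "continue" in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Exercise it against a synthetic meminfo file:
tmp=$(mktemp)
printf '%s\n' 'MemTotal: 27711824 kB' 'HugePages_Surp: 0' > "$tmp"
surp=$(get_meminfo_sketch HugePages_Surp "$tmp")
echo "HugePages_Surp=$surp"   # prints HugePages_Surp=0
rm -f "$tmp"
```

With `IFS=': '`, a line like `MemTotal: 27711824 kB` splits into `var=MemTotal`, `val=27711824`, with the trailing `kB` consumed by the throwaway `_` field, which is why the trace compares only the bare field name on each iteration.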
23:44:44 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.287 23:44:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:39.287 ************************************ 00:04:39.287 START TEST custom_alloc 00:04:39.287 ************************************ 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:39.287 23:44:44 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 
00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:39.287 23:44:44 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.287 23:44:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # 
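The custom_alloc trace above builds a per-node hugepage plan (`nodes_hp[0]=512`, `nodes_hp[1]=1024`) and joins it into the single `HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'` string using the `local IFS=,` set at hugepages.sh@167. A hedged sketch of that assembly step (function name and argument shape are illustrative, not the SPDK code):

```shell
#!/usr/bin/env bash
# Sketch of the HUGENODE assembly seen in the trace: nodes_hp maps a NUMA
# node index to its hugepage count; each entry becomes "nodes_hp[N]=count"
# and the entries are joined with ',' via IFS when the array is expanded
# with "${parts[*]}".
build_hugenode() {
    local IFS=,          # first char of IFS is the join separator for [*]
    local node
    local -a nodes_hp=("$@") parts=()
    for node in "${!nodes_hp[@]}"; do
        parts+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    echo "${parts[*]}"
}

hugenode=$(build_hugenode 512 1024)
echo "$hugenode"   # prints nodes_hp[0]=512,nodes_hp[1]=1024
```

This matches the `HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")` loop at hugepages.sh@181-182 in the trace, where the comma-joined result is later handed to setup.sh to request 512 hugepages on node 0 and 1024 on node 1 (1536 total).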
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:40.222 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:40.222 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:40.222 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:40.222 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:40.222 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:40.222 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:40.222 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:40.222 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:40.222 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:40.222 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:40.222 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:40.222 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:40.222 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:40.222 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:40.222 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:40.222 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:40.222 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 
00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 38072500 kB' 'MemAvailable: 42007520 kB' 'Buffers: 2704 kB' 'Cached: 16591736 kB' 'SwapCached: 0 kB' 'Active: 13454200 kB' 'Inactive: 3693412 kB' 'Active(anon): 13014428 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556464 
kB' 'Mapped: 176496 kB' 'Shmem: 12461256 kB' 'KReclaimable: 446272 kB' 'Slab: 841704 kB' 'SReclaimable: 446272 kB' 'SUnreclaim: 395432 kB' 'KernelStack: 12800 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 14143912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197112 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.486 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 
23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.487 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 38075884 kB' 'MemAvailable: 42010904 kB' 'Buffers: 2704 kB' 'Cached: 16591740 kB' 'SwapCached: 0 kB' 'Active: 13454596 kB' 'Inactive: 3693412 kB' 'Active(anon): 13014824 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556912 kB' 'Mapped: 176536 kB' 'Shmem: 12461260 kB' 'KReclaimable: 446272 kB' 'Slab: 841688 kB' 'SReclaimable: 446272 kB' 'SUnreclaim: 395416 kB' 'KernelStack: 12816 kB' 'PageTables: 7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 14143932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.488 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.489 
23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _
[xtrace: setup/common.sh@31-32 compares each remaining /proc/meminfo field (Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd) against HugePages_Surp and skips it with continue]
00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.489 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.490 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 38075632 kB' 'MemAvailable: 42010652 kB' 'Buffers: 2704 kB' 'Cached: 16591752 kB' 'SwapCached: 0 kB' 'Active: 13454424 kB' 'Inactive: 3693412 kB' 'Active(anon): 13014652 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556716 kB' 'Mapped: 176536 kB' 'Shmem: 12461272 kB' 'KReclaimable: 446272 kB' 'Slab: 841688 kB' 'SReclaimable: 446272 kB' 'SUnreclaim: 395416 kB' 'KernelStack: 12816 kB' 'PageTables: 7744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 14143952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB'
[xtrace: setup/common.sh@31-32 scans the snapshot field by field (MemTotal through HugePages_Free), comparing each against HugePages_Rsvd and skipping it with continue]
00:04:40.491 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:40.491 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:40.491 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:40.491 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:40.491 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:04:40.491 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:40.491 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 38075380 kB' 'MemAvailable: 42010400 kB' 'Buffers: 2704 kB' 'Cached: 16591776 kB' 'SwapCached: 0 kB' 'Active: 13454536 kB' 'Inactive: 3693412 kB' 'Active(anon): 13014764 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556792 kB' 'Mapped: 176456 kB' 'Shmem: 12461296 kB' 'KReclaimable: 446272 kB' 'Slab: 841688 kB' 'SReclaimable: 446272 kB' 'SUnreclaim: 395416 kB' 'KernelStack: 12832 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 14143972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB'
[xtrace: setup/common.sh@31-32 scans the snapshot field by field (MemTotal through Zswapped), comparing each against HugePages_Total and skipping it with continue] 00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.492 23:44:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.492 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:40.493 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 17488024 kB' 'MemUsed: 15341860 kB' 'SwapCached: 0 kB' 'Active: 9782208 kB' 'Inactive: 3337740 kB' 'Active(anon): 9426452 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12838280 kB' 'Mapped: 112400 kB' 'AnonPages: 284840 kB' 'Shmem: 9144784 kB' 'KernelStack: 8008 kB' 'PageTables: 5256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164728 kB' 'Slab: 355236 kB' 'SReclaimable: 164728 kB' 'SUnreclaim: 190508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.494 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.495 23:44:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 20586348 kB' 'MemUsed: 7125476 kB' 'SwapCached: 0 kB' 'Active: 3672364 kB' 'Inactive: 355672 kB' 'Active(anon): 3588348 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 355672 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3756224 kB' 'Mapped: 64056 kB' 'AnonPages: 271952 kB' 'Shmem: 3316536 kB' 'KernelStack: 4824 kB' 'PageTables: 2536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281544 kB' 'Slab: 486452 kB' 'SReclaimable: 281544 kB' 'SUnreclaim: 204908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.495 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 
23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.496 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.497 23:44:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:40.497 node0=512 expecting 512 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:40.497 node1=1024 expecting 1024 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:40.497 00:04:40.497 real 0m1.415s 00:04:40.497 user 
0m0.615s 00:04:40.497 sys 0m0.760s 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:40.497 23:44:46 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:40.497 ************************************ 00:04:40.497 END TEST custom_alloc 00:04:40.497 ************************************ 00:04:40.754 23:44:46 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:40.754 23:44:46 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:40.755 23:44:46 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.755 23:44:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:40.755 ************************************ 00:04:40.755 START TEST no_shrink_alloc 00:04:40.755 ************************************ 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:40.755 
23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.755 23:44:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:41.689 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:41.689 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:41.689 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:41.690 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:41.690 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:41.690 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:41.690 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:41.690 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 
00:04:41.690 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:41.690 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:41.690 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:41.690 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:41.690 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:41.690 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:41.690 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:41.690 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:41.690 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # 
local mem_f mem 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39135796 kB' 'MemAvailable: 43070816 kB' 'Buffers: 2704 kB' 'Cached: 16591868 kB' 'SwapCached: 0 kB' 'Active: 13455144 kB' 'Inactive: 3693412 kB' 'Active(anon): 13015372 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557208 kB' 'Mapped: 176520 kB' 'Shmem: 12461388 kB' 'KReclaimable: 446272 kB' 'Slab: 841600 kB' 'SReclaimable: 446272 kB' 'SUnreclaim: 395328 kB' 'KernelStack: 12832 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14144332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197112 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.954 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:41.955 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.956 23:44:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 
23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.956 23:44:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.956 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39136772 kB' 'MemAvailable: 43071792 kB' 'Buffers: 2704 kB' 'Cached: 16591868 kB' 'SwapCached: 0 kB' 'Active: 13454916 kB' 'Inactive: 3693412 kB' 'Active(anon): 13015144 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556968 kB' 'Mapped: 176480 kB' 'Shmem: 12461388 kB' 'KReclaimable: 446272 kB' 'Slab: 841596 kB' 'SReclaimable: 446272 kB' 'SUnreclaim: 395324 kB' 'KernelStack: 12848 kB' 'PageTables: 7744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14144348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:41.957 
23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 
23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.957 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:41.958 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: keys Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd are each read and skipped via continue; none match \H\u\g\e\P\a\g\e\s\_\S\u\r\p] 00:04:41.959 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.959 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.959 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:41.959 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:41.959 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.959 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # [repetitive xtrace condensed: local get=HugePages_Rsvd, node=, mem_f=/proc/meminfo; /sys/devices/system/node/node/meminfo is absent, so /proc/meminfo is read via mapfile -t mem and any "Node <n>" prefixes are stripped] 00:04:41.959 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39136804 kB' 'MemAvailable: 43071824 kB' 'Buffers: 2704 kB' 'Cached: 16591888 kB' 'SwapCached: 0 kB' 'Active: 13455000 kB' 'Inactive: 3693412 kB' 'Active(anon): 13015228 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557016 kB' 'Mapped: 176480 kB' 'Shmem: 12461408 kB' 'KReclaimable: 446272 kB' 'Slab: 841612 kB' 'SReclaimable: 446272 kB' 'SUnreclaim: 395340 kB' 'KernelStack: 12864 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14144372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:41.959 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: every key from MemTotal through HugePages_Free is read and skipped via continue; none match \H\u\g\e\P\a\g\e\s\_\R\s\v\d] 00:04:41.962 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.962 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.962 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:41.962 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:41.962 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:41.962 nr_hugepages=1024 00:04:41.962 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.962 resv_hugepages=0 00:04:41.962 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.962 surplus_hugepages=0 00:04:41.962 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.962 anon_hugepages=0 00:04:41.962 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.962 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:41.962 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.962 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # [repetitive xtrace condensed: local get=HugePages_Total, node=, mem_f=/proc/meminfo; per-node meminfo absent, /proc/meminfo mapped via mapfile -t mem as before] 00:04:41.962 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39136552 kB' 'MemAvailable: 43071572 kB' 'Buffers: 2704 kB' 'Cached: 16591908 kB' 'SwapCached: 0 kB' 'Active: 13454964 kB' 'Inactive: 3693412 kB' 'Active(anon): 13015192 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556972 kB' 'Mapped: 176480 kB' 'Shmem: 12461428 kB' 'KReclaimable: 446272 kB' 'Slab: 841612 kB' 'SReclaimable: 446272 kB' 'SUnreclaim: 395340 kB' 'KernelStack: 12864 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14144392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:41.962 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: keys MemTotal through SwapTotal read and skipped via continue; none match \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l] 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32
-- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.963 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.964 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.965 23:44:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var 
val 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 16444120 kB' 'MemUsed: 16385764 kB' 'SwapCached: 0 kB' 'Active: 9782320 kB' 'Inactive: 3337740 kB' 'Active(anon): 9426564 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12838336 kB' 'Mapped: 112424 kB' 'AnonPages: 284796 kB' 'Shmem: 9144840 kB' 'KernelStack: 8008 kB' 'PageTables: 5160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164728 kB' 'Slab: 355124 kB' 'SReclaimable: 164728 kB' 'SUnreclaim: 190396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.965 23:44:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue  [loop trace repeated for each node0 meminfo key from MemTotal through HugePages_Free; none matches HugePages_Surp]
00:04:41.966 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.966 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:41.966 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:41.966 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:41.966 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:41.966 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:41.966 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:41.966 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:41.967 23:44:47
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:41.967 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:41.967 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:41.967 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:41.967 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.967 23:44:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:43.343 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:43.343 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:43.343 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:43.343 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:43.343 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:43.343 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:43.343 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:43.343 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:43.343 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:43.343 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:43.343 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:43.343 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:43.343 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:43.343 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:43.343 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:43.343 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:43.343 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:43.343 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:43.343 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:43.343 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:43.343 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39144496 kB' 'MemAvailable: 43079516 kB' 'Buffers: 2704 kB' 'Cached: 16591976 kB' 'SwapCached: 0 kB' 'Active: 13455660 kB' 'Inactive: 3693412 kB' 'Active(anon): 13015888 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557592 kB' 'Mapped: 176620 kB' 'Shmem: 12461496 kB' 'KReclaimable: 446272 kB' 'Slab: 841908 kB' 'SReclaimable: 446272 kB' 'SUnreclaim: 395636 kB' 'KernelStack: 12896 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14144576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 
23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39145832 kB' 'MemAvailable: 43080852 kB' 'Buffers: 2704 kB' 'Cached: 16591980 kB' 'SwapCached: 0 kB' 'Active: 13455236 kB' 'Inactive: 3693412 kB' 'Active(anon): 13015464 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557156 kB' 'Mapped: 176560 kB' 'Shmem: 12461500 kB' 'KReclaimable: 446272 kB' 'Slab: 841908 kB' 'SReclaimable: 446272 kB' 'SUnreclaim: 395636 kB' 'KernelStack: 12864 kB' 'PageTables: 7764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14144592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 
23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 
23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.347 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39146120 kB' 'MemAvailable: 43081140 kB' 'Buffers: 2704 kB' 'Cached: 16592000 kB' 'SwapCached: 0 kB' 'Active: 13455096 kB' 'Inactive: 3693412 kB' 'Active(anon): 13015324 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557036 kB' 'Mapped: 176484 kB' 'Shmem: 12461520 kB' 'KReclaimable: 446272 kB' 'Slab: 841892 kB' 'SReclaimable: 446272 kB' 'SUnreclaim: 395620 kB' 'KernelStack: 12880 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14144616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:43.348 
23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 
23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.348 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:43.349 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:43.350 nr_hugepages=1024 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:43.350 resv_hugepages=0 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:43.350 surplus_hugepages=0 00:04:43.350 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:43.350 anon_hugepages=0 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39146120 kB' 'MemAvailable: 43081140 kB' 'Buffers: 2704 kB' 'Cached: 16592000 kB' 'SwapCached: 0 kB' 'Active: 13454796 kB' 'Inactive: 3693412 kB' 'Active(anon): 13015024 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 
3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556736 kB' 'Mapped: 176484 kB' 'Shmem: 12461520 kB' 'KReclaimable: 446272 kB' 'Slab: 841892 kB' 'SReclaimable: 446272 kB' 'SUnreclaim: 395620 kB' 'KernelStack: 12864 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 14144636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 43008 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1836636 kB' 'DirectMap2M: 19054592 kB' 'DirectMap1G: 48234496 kB' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@27 -- # local node 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.352 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 16443196 kB' 'MemUsed: 16386688 kB' 'SwapCached: 0 kB' 'Active: 9781920 kB' 'Inactive: 3337740 kB' 'Active(anon): 9426164 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12838336 kB' 'Mapped: 112428 kB' 'AnonPages: 284432 kB' 'Shmem: 9144840 kB' 'KernelStack: 8024 kB' 'PageTables: 5168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164728 kB' 'Slab: 355216 kB' 'SReclaimable: 164728 kB' 'SUnreclaim: 190488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.353 
23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:43.353 node0=1024 expecting 1024 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:43.353 00:04:43.353 real 0m2.735s 00:04:43.353 user 0m1.084s 00:04:43.353 sys 0m1.569s 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:43.353 23:44:48 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:43.353 ************************************ 00:04:43.353 END TEST no_shrink_alloc 00:04:43.353 ************************************ 00:04:43.353 23:44:48 setup.sh.hugepages -- 
setup/hugepages.sh@217 -- # clear_hp 00:04:43.353 23:44:48 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:43.353 23:44:48 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:43.353 23:44:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:43.353 23:44:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:43.353 23:44:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:43.353 23:44:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:43.353 23:44:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:43.353 23:44:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:43.353 23:44:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:43.353 23:44:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:43.353 23:44:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:43.353 23:44:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:43.353 23:44:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:43.353 00:04:43.353 real 0m11.283s 00:04:43.353 user 0m4.298s 00:04:43.353 sys 0m5.845s 00:04:43.353 23:44:49 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:43.353 23:44:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:43.354 ************************************ 00:04:43.354 END TEST hugepages 00:04:43.354 ************************************ 00:04:43.354 23:44:49 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:43.354 23:44:49 setup.sh -- common/autotest_common.sh@1097 -- 
# '[' 2 -le 1 ']' 00:04:43.354 23:44:49 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.354 23:44:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:43.354 ************************************ 00:04:43.354 START TEST driver 00:04:43.354 ************************************ 00:04:43.354 23:44:49 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:43.609 * Looking for test storage... 00:04:43.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:43.609 23:44:49 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:43.609 23:44:49 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.610 23:44:49 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:46.138 23:44:51 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:46.138 23:44:51 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:46.138 23:44:51 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.138 23:44:51 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:46.138 ************************************ 00:04:46.138 START TEST guess_driver 00:04:46.138 ************************************ 00:04:46.138 23:44:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:46.138 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:46.138 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:46.138 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:46.138 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:46.138 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:46.138 23:44:51 
setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:46.138 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:46.138 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:46.138 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:46.138 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:46.138 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:46.138 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:46.138 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:46.138 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:46.138 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:46.138 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:46.138 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:46.138 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:46.138 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:46.138 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:46.138 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:46.138 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:46.138 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:46.138 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:46.139 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # 
driver=vfio-pci 00:04:46.139 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:46.139 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:46.139 Looking for driver=vfio-pci 00:04:46.139 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.139 23:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:46.139 23:44:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.139 23:44:51 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read 
-r _ _ _ _ marker setup_driver 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> 
== \-\> ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.100 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.359 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.360 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.360 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.360 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.360 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.360 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.360 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.360 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.360 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.360 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.360 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.360 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.360 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.360 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.360 23:44:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.292 23:44:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.292 23:44:53 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
00:04:48.292 23:44:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.292 23:44:53 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:48.292 23:44:53 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:48.292 23:44:53 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:48.292 23:44:53 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:50.823 00:04:50.823 real 0m4.754s 00:04:50.823 user 0m1.055s 00:04:50.823 sys 0m1.811s 00:04:50.823 23:44:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.823 23:44:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:50.823 ************************************ 00:04:50.823 END TEST guess_driver 00:04:50.823 ************************************ 00:04:50.823 00:04:50.823 real 0m7.250s 00:04:50.823 user 0m1.614s 00:04:50.823 sys 0m2.770s 00:04:50.823 23:44:56 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.823 23:44:56 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:50.823 ************************************ 00:04:50.823 END TEST driver 00:04:50.823 ************************************ 00:04:50.823 23:44:56 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:50.823 23:44:56 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.823 23:44:56 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.823 23:44:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:50.823 ************************************ 00:04:50.823 START TEST devices 00:04:50.823 ************************************ 00:04:50.823 23:44:56 setup.sh.devices -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:50.823 * Looking for test storage... 00:04:50.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:50.823 23:44:56 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:50.823 23:44:56 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:50.823 23:44:56 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.823 23:44:56 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:52.197 23:44:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:52.197 23:44:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:52.197 23:44:57 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:52.197 23:44:57 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:52.197 23:44:57 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:52.197 23:44:57 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:52.197 23:44:57 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:52.197 23:44:57 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@200 -- # 
for block in "/sys/block/nvme"!(*c*) 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:52.197 23:44:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:52.197 23:44:57 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:52.197 No valid GPT data, bailing 00:04:52.197 23:44:57 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:52.197 23:44:57 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:52.197 23:44:57 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:52.197 23:44:57 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:52.197 23:44:57 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:52.197 23:44:57 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:52.197 23:44:57 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:52.197 23:44:57 setup.sh.devices -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:52.197 23:44:57 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:52.197 23:44:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:52.197 ************************************ 00:04:52.197 START TEST nvme_mount 00:04:52.197 ************************************ 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:52.197 23:44:57 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:52.197 23:44:57 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:53.570 Creating new GPT entries in memory. 00:04:53.570 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:53.570 other utilities. 00:04:53.570 23:44:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:53.570 23:44:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.570 23:44:58 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:53.570 23:44:58 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:53.570 23:44:58 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:54.508 Creating new GPT entries in memory. 00:04:54.508 The operation has completed successfully. 
00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 643663 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.508 23:44:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:55.443 23:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.443 23:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.443 23:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.443 23:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.443 23:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.443 23:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.443 23:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.443 23:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.443 23:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.443 23:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.443 23:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.443 23:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.443 23:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.443 23:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.443 23:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.443 23:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.443 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.702 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.702 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:55.702 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.702 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.702 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.702 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 
00:04:55.702 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.702 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.702 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:55.702 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:55.702 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:55.702 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:55.702 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:55.960 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:55.960 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:55.960 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:55.960 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount 
/dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.960 23:45:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:56.893 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ 
\d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:56.894 23:45:02 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:57.152 23:45:02 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.152 23:45:02 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:58.525 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.525 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.525 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.525 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.525 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.525 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.525 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.526 23:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.526 23:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.526 23:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:58.526 23:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:58.526 23:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:58.526 23:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.526 23:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:58.526 23:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:58.526 23:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:58.526 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:58.526 00:04:58.526 real 0m6.286s 00:04:58.526 user 0m1.456s 00:04:58.526 sys 0m2.403s 00:04:58.526 23:45:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:58.526 23:45:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:58.526 ************************************ 00:04:58.526 END TEST nvme_mount 00:04:58.526 ************************************ 00:04:58.526 23:45:04 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:58.526 23:45:04 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
00:04:58.526 23:45:04 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.526 23:45:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:58.526 ************************************ 00:04:58.526 START TEST dm_mount 00:04:58.526 ************************************ 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:58.526 23:45:04 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:59.902 Creating new GPT entries in memory. 00:04:59.902 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:59.902 other utilities. 00:04:59.902 23:45:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:59.902 23:45:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.902 23:45:05 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:59.902 23:45:05 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:59.902 23:45:05 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:00.838 Creating new GPT entries in memory. 00:05:00.838 The operation has completed successfully. 00:05:00.838 23:45:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:00.838 23:45:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.838 23:45:06 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:00.838 23:45:06 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:00.838 23:45:06 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:01.774 The operation has completed successfully. 
00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 646160 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.774 23:45:07 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.708 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.966 23:45:08 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.901 23:45:09 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 
== \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.901 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.161 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.161 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:04.161 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:04.161 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:04.161 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.161 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:04.161 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:04.161 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.161 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:04.161 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:05:04.161 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:04.161 23:45:09 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:04.161 00:05:04.161 real 0m5.588s 00:05:04.161 user 0m0.884s 00:05:04.161 sys 0m1.538s 00:05:04.161 23:45:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.161 23:45:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:04.161 ************************************ 00:05:04.161 END TEST dm_mount 00:05:04.161 ************************************ 00:05:04.161 23:45:09 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:04.161 23:45:09 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:04.161 23:45:09 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.161 23:45:09 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.161 23:45:09 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:04.161 23:45:09 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.161 23:45:09 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:04.420 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:04.420 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:04.420 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:04.420 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:04.420 23:45:10 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:04.420 23:45:10 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.420 23:45:10 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:05:04.420 23:45:10 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.420 23:45:10 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:04.420 23:45:10 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.420 23:45:10 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:04.420 00:05:04.420 real 0m13.760s 00:05:04.420 user 0m2.990s 00:05:04.420 sys 0m4.940s 00:05:04.420 23:45:10 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.420 23:45:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:04.420 ************************************ 00:05:04.420 END TEST devices 00:05:04.420 ************************************ 00:05:04.420 00:05:04.420 real 0m42.961s 00:05:04.420 user 0m12.222s 00:05:04.420 sys 0m18.986s 00:05:04.420 23:45:10 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.420 23:45:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:04.420 ************************************ 00:05:04.420 END TEST setup.sh 00:05:04.420 ************************************ 00:05:04.679 23:45:10 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:05.615 Hugepages 00:05:05.615 node hugesize free / total 00:05:05.615 node0 1048576kB 0 / 0 00:05:05.615 node0 2048kB 2048 / 2048 00:05:05.615 node1 1048576kB 0 / 0 00:05:05.615 node1 2048kB 0 / 0 00:05:05.615 00:05:05.615 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:05.615 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:05.615 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:05.615 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:05.615 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:05.615 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:05.615 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:05.615 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:05.615 I/OAT 
0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:05.873 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:05.873 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:05.873 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:05.873 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:05.873 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:05.873 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:05.873 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:05.873 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:05.873 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:05.873 23:45:11 -- spdk/autotest.sh@130 -- # uname -s 00:05:05.873 23:45:11 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:05.873 23:45:11 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:05.873 23:45:11 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:06.807 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:06.807 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:06.807 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:06.807 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:06.807 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:07.065 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:07.066 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:07.066 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:07.066 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:07.066 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:07.066 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:07.066 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:07.066 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:07.066 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:07.066 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:07.066 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:08.003 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:05:08.003 23:45:13 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:08.980 23:45:14 -- 
common/autotest_common.sh@1529 -- # bdfs=() 00:05:08.980 23:45:14 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:08.980 23:45:14 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:08.980 23:45:14 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:08.980 23:45:14 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:08.980 23:45:14 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:08.980 23:45:14 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:08.980 23:45:14 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:08.980 23:45:14 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:09.238 23:45:14 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:09.238 23:45:14 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:0b:00.0 00:05:09.238 23:45:14 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:10.170 Waiting for block devices as requested 00:05:10.170 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:10.170 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:10.427 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:10.427 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:10.427 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:10.427 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:10.685 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:10.685 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:10.685 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:05:10.943 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:10.943 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:10.943 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:11.200 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:11.200 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:11.200 0000:80:04.2 (8086 0e22): vfio-pci -> 
ioatdma 00:05:11.200 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:11.459 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:11.459 23:45:17 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:11.459 23:45:17 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:05:11.459 23:45:17 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:11.459 23:45:17 -- common/autotest_common.sh@1498 -- # grep 0000:0b:00.0/nvme/nvme 00:05:11.459 23:45:17 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:05:11.459 23:45:17 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:05:11.459 23:45:17 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:05:11.459 23:45:17 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:11.459 23:45:17 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:11.459 23:45:17 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:11.459 23:45:17 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:11.459 23:45:17 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:11.459 23:45:17 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:11.459 23:45:17 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:05:11.459 23:45:17 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:11.459 23:45:17 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:11.459 23:45:17 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:11.459 23:45:17 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:11.459 23:45:17 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:11.459 23:45:17 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:11.459 23:45:17 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:11.459 23:45:17 -- 
common/autotest_common.sh@1553 -- # continue 00:05:11.459 23:45:17 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:11.459 23:45:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.459 23:45:17 -- common/autotest_common.sh@10 -- # set +x 00:05:11.459 23:45:17 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:11.459 23:45:17 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:11.459 23:45:17 -- common/autotest_common.sh@10 -- # set +x 00:05:11.459 23:45:17 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:12.832 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:12.832 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:12.832 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:12.832 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:12.832 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:12.832 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:12.832 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:12.832 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:12.832 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:12.832 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:12.832 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:12.832 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:12.832 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:12.832 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:12.832 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:12.832 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:13.765 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:05:13.765 23:45:19 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:13.765 23:45:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.765 23:45:19 -- common/autotest_common.sh@10 -- # set +x 00:05:13.765 23:45:19 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:13.765 23:45:19 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:13.765 23:45:19 -- 
common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:13.765 23:45:19 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:13.765 23:45:19 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:13.765 23:45:19 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:13.765 23:45:19 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:13.765 23:45:19 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:13.765 23:45:19 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:13.765 23:45:19 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:13.765 23:45:19 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:13.765 23:45:19 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:13.765 23:45:19 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:0b:00.0 00:05:13.765 23:45:19 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:13.765 23:45:19 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:05:13.766 23:45:19 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:13.766 23:45:19 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:13.766 23:45:19 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:13.766 23:45:19 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:0b:00.0 00:05:13.766 23:45:19 -- common/autotest_common.sh@1588 -- # [[ -z 0000:0b:00.0 ]] 00:05:13.766 23:45:19 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=651839 00:05:13.766 23:45:19 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.766 23:45:19 -- common/autotest_common.sh@1594 -- # waitforlisten 651839 00:05:13.766 23:45:19 -- common/autotest_common.sh@827 -- # '[' -z 651839 ']' 00:05:13.766 23:45:19 -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:13.766 23:45:19 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:13.766 23:45:19 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.766 23:45:19 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:13.766 23:45:19 -- common/autotest_common.sh@10 -- # set +x 00:05:14.023 [2024-07-24 23:45:19.515484] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:14.023 [2024-07-24 23:45:19.515566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid651839 ] 00:05:14.023 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.023 [2024-07-24 23:45:19.579146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.023 [2024-07-24 23:45:19.675686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.280 23:45:19 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:14.280 23:45:19 -- common/autotest_common.sh@860 -- # return 0 00:05:14.280 23:45:19 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:14.280 23:45:19 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:14.280 23:45:19 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:05:17.563 nvme0n1 00:05:17.563 23:45:23 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:17.563 [2024-07-24 23:45:23.254032] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:17.563 
[2024-07-24 23:45:23.254075] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:17.563 request: 00:05:17.563 { 00:05:17.563 "nvme_ctrlr_name": "nvme0", 00:05:17.563 "password": "test", 00:05:17.563 "method": "bdev_nvme_opal_revert", 00:05:17.563 "req_id": 1 00:05:17.563 } 00:05:17.563 Got JSON-RPC error response 00:05:17.563 response: 00:05:17.563 { 00:05:17.563 "code": -32603, 00:05:17.563 "message": "Internal error" 00:05:17.563 } 00:05:17.563 23:45:23 -- common/autotest_common.sh@1600 -- # true 00:05:17.563 23:45:23 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:17.563 23:45:23 -- common/autotest_common.sh@1604 -- # killprocess 651839 00:05:17.563 23:45:23 -- common/autotest_common.sh@946 -- # '[' -z 651839 ']' 00:05:17.563 23:45:23 -- common/autotest_common.sh@950 -- # kill -0 651839 00:05:17.563 23:45:23 -- common/autotest_common.sh@951 -- # uname 00:05:17.563 23:45:23 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:17.563 23:45:23 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 651839 00:05:17.823 23:45:23 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:17.823 23:45:23 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:17.823 23:45:23 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 651839' 00:05:17.823 killing process with pid 651839 00:05:17.823 23:45:23 -- common/autotest_common.sh@965 -- # kill 651839 00:05:17.823 23:45:23 -- common/autotest_common.sh@970 -- # wait 651839 00:05:17.823 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.823 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.823 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.823 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.823 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.823 EAL: Unexpected size 0 of DMA remapping 
cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping
cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping 
cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping 
cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping 
cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.824 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping 
cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping 
cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.825 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:19.724 23:45:25 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:19.724 23:45:25 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:19.724 23:45:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:19.724 23:45:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:19.724 23:45:25 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:19.724 23:45:25 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:19.724 23:45:25 -- common/autotest_common.sh@10 -- # set +x 00:05:19.724 23:45:25 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:19.724 23:45:25 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:19.724 23:45:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.724 23:45:25 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.724 23:45:25 -- common/autotest_common.sh@10 -- # set +x 00:05:19.724 ************************************ 00:05:19.724 START TEST env 00:05:19.724 ************************************ 00:05:19.724 23:45:25 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:19.724 * Looking for test storage... 00:05:19.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:19.724 23:45:25 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:19.724 23:45:25 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.724 23:45:25 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.724 23:45:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.724 ************************************ 00:05:19.724 START TEST env_memory 00:05:19.724 ************************************ 00:05:19.724 23:45:25 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:19.724 00:05:19.724 00:05:19.724 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.724 http://cunit.sourceforge.net/ 00:05:19.724 00:05:19.724 00:05:19.724 Suite: memory 00:05:19.724 Test: alloc and free memory map ...[2024-07-24 23:45:25.183405] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:19.724 passed 00:05:19.724 Test: mem map translation ...[2024-07-24 23:45:25.204747] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:19.724 [2024-07-24 23:45:25.204768] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 
590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:19.724 [2024-07-24 23:45:25.204824] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:19.724 [2024-07-24 23:45:25.204836] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:19.724 passed 00:05:19.724 Test: mem map registration ...[2024-07-24 23:45:25.246032] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:19.724 [2024-07-24 23:45:25.246054] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:19.724 passed 00:05:19.724 Test: mem map adjacent registrations ...passed 00:05:19.724 00:05:19.724 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.724 suites 1 1 n/a 0 0 00:05:19.724 tests 4 4 4 0 0 00:05:19.724 asserts 152 152 152 0 n/a 00:05:19.724 00:05:19.724 Elapsed time = 0.143 seconds 00:05:19.724 00:05:19.724 real 0m0.151s 00:05:19.724 user 0m0.141s 00:05:19.724 sys 0m0.010s 00:05:19.724 23:45:25 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.724 23:45:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:19.724 ************************************ 00:05:19.724 END TEST env_memory 00:05:19.724 ************************************ 00:05:19.724 23:45:25 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:19.724 23:45:25 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.724 23:45:25 env -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:05:19.724 23:45:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.724 ************************************ 00:05:19.724 START TEST env_vtophys 00:05:19.724 ************************************ 00:05:19.724 23:45:25 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:19.724 EAL: lib.eal log level changed from notice to debug 00:05:19.724 EAL: Detected lcore 0 as core 0 on socket 0 00:05:19.724 EAL: Detected lcore 1 as core 1 on socket 0 00:05:19.724 EAL: Detected lcore 2 as core 2 on socket 0 00:05:19.724 EAL: Detected lcore 3 as core 3 on socket 0 00:05:19.724 EAL: Detected lcore 4 as core 4 on socket 0 00:05:19.724 EAL: Detected lcore 5 as core 5 on socket 0 00:05:19.724 EAL: Detected lcore 6 as core 8 on socket 0 00:05:19.724 EAL: Detected lcore 7 as core 9 on socket 0 00:05:19.724 EAL: Detected lcore 8 as core 10 on socket 0 00:05:19.724 EAL: Detected lcore 9 as core 11 on socket 0 00:05:19.724 EAL: Detected lcore 10 as core 12 on socket 0 00:05:19.724 EAL: Detected lcore 11 as core 13 on socket 0 00:05:19.724 EAL: Detected lcore 12 as core 0 on socket 1 00:05:19.724 EAL: Detected lcore 13 as core 1 on socket 1 00:05:19.724 EAL: Detected lcore 14 as core 2 on socket 1 00:05:19.724 EAL: Detected lcore 15 as core 3 on socket 1 00:05:19.724 EAL: Detected lcore 16 as core 4 on socket 1 00:05:19.724 EAL: Detected lcore 17 as core 5 on socket 1 00:05:19.724 EAL: Detected lcore 18 as core 8 on socket 1 00:05:19.724 EAL: Detected lcore 19 as core 9 on socket 1 00:05:19.724 EAL: Detected lcore 20 as core 10 on socket 1 00:05:19.724 EAL: Detected lcore 21 as core 11 on socket 1 00:05:19.724 EAL: Detected lcore 22 as core 12 on socket 1 00:05:19.724 EAL: Detected lcore 23 as core 13 on socket 1 00:05:19.724 EAL: Detected lcore 24 as core 0 on socket 0 00:05:19.724 EAL: Detected lcore 25 as core 1 on socket 0 00:05:19.724 EAL: Detected lcore 26 as core 2 on socket 0 00:05:19.724 
EAL: Detected lcore 27 as core 3 on socket 0 00:05:19.724 EAL: Detected lcore 28 as core 4 on socket 0 00:05:19.724 EAL: Detected lcore 29 as core 5 on socket 0 00:05:19.724 EAL: Detected lcore 30 as core 8 on socket 0 00:05:19.724 EAL: Detected lcore 31 as core 9 on socket 0 00:05:19.724 EAL: Detected lcore 32 as core 10 on socket 0 00:05:19.724 EAL: Detected lcore 33 as core 11 on socket 0 00:05:19.724 EAL: Detected lcore 34 as core 12 on socket 0 00:05:19.724 EAL: Detected lcore 35 as core 13 on socket 0 00:05:19.724 EAL: Detected lcore 36 as core 0 on socket 1 00:05:19.724 EAL: Detected lcore 37 as core 1 on socket 1 00:05:19.724 EAL: Detected lcore 38 as core 2 on socket 1 00:05:19.724 EAL: Detected lcore 39 as core 3 on socket 1 00:05:19.724 EAL: Detected lcore 40 as core 4 on socket 1 00:05:19.724 EAL: Detected lcore 41 as core 5 on socket 1 00:05:19.724 EAL: Detected lcore 42 as core 8 on socket 1 00:05:19.724 EAL: Detected lcore 43 as core 9 on socket 1 00:05:19.724 EAL: Detected lcore 44 as core 10 on socket 1 00:05:19.724 EAL: Detected lcore 45 as core 11 on socket 1 00:05:19.724 EAL: Detected lcore 46 as core 12 on socket 1 00:05:19.724 EAL: Detected lcore 47 as core 13 on socket 1 00:05:19.724 EAL: Maximum logical cores by configuration: 128 00:05:19.724 EAL: Detected CPU lcores: 48 00:05:19.724 EAL: Detected NUMA nodes: 2 00:05:19.724 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:19.724 EAL: Detected shared linkage of DPDK 00:05:19.724 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:19.724 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:19.724 EAL: Registered [vdev] bus. 
00:05:19.724 EAL: bus.vdev log level changed from disabled to notice 00:05:19.724 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:19.724 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:19.724 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:19.724 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:19.724 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:19.724 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:19.725 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:19.725 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:19.725 EAL: No shared files mode enabled, IPC will be disabled 00:05:19.725 EAL: No shared files mode enabled, IPC is disabled 00:05:19.725 EAL: Bus pci wants IOVA as 'DC' 00:05:19.725 EAL: Bus vdev wants IOVA as 'DC' 00:05:19.725 EAL: Buses did not request a specific IOVA mode. 00:05:19.725 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:19.725 EAL: Selected IOVA mode 'VA' 00:05:19.725 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.725 EAL: Probing VFIO support... 00:05:19.725 EAL: IOMMU type 1 (Type 1) is supported 00:05:19.725 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:19.725 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:19.725 EAL: VFIO support initialized 00:05:19.725 EAL: Ask a virtual area of 0x2e000 bytes 00:05:19.725 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:19.725 EAL: Setting up physically contiguous memory... 
00:05:19.725 EAL: Setting maximum number of open files to 524288 00:05:19.725 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:19.725 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:19.725 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:19.725 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.725 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:19.725 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.725 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.725 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:19.725 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:19.725 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.725 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:19.725 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.725 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.725 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:19.725 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:19.725 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.725 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:19.725 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.725 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.725 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:19.725 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:19.725 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.725 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:19.725 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.725 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.725 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:19.725 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:19.725 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:19.725 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.725 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:19.725 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.725 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.725 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:19.725 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:19.725 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.725 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:19.725 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.725 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.725 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:19.725 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:19.725 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.725 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:19.725 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.725 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.725 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:19.725 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:19.725 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.725 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:19.725 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.725 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.725 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:19.725 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:19.725 EAL: Hugepages will be freed exactly as allocated. 
00:05:19.725 EAL: No shared files mode enabled, IPC is disabled 00:05:19.725 EAL: No shared files mode enabled, IPC is disabled 00:05:19.725 EAL: TSC frequency is ~2700000 KHz 00:05:19.725 EAL: Main lcore 0 is ready (tid=7f932e3d6a00;cpuset=[0]) 00:05:19.725 EAL: Trying to obtain current memory policy. 00:05:19.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.725 EAL: Restoring previous memory policy: 0 00:05:19.725 EAL: request: mp_malloc_sync 00:05:19.725 EAL: No shared files mode enabled, IPC is disabled 00:05:19.725 EAL: Heap on socket 0 was expanded by 2MB 00:05:19.725 EAL: No shared files mode enabled, IPC is disabled 00:05:19.725 EAL: No shared files mode enabled, IPC is disabled 00:05:19.725 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:19.725 EAL: Mem event callback 'spdk:(nil)' registered 00:05:19.725 00:05:19.725 00:05:19.725 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.725 http://cunit.sourceforge.net/ 00:05:19.725 00:05:19.725 00:05:19.725 Suite: components_suite 00:05:19.725 Test: vtophys_malloc_test ...passed 00:05:19.725 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:19.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.725 EAL: Restoring previous memory policy: 4 00:05:19.725 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.725 EAL: request: mp_malloc_sync 00:05:19.725 EAL: No shared files mode enabled, IPC is disabled 00:05:19.725 EAL: Heap on socket 0 was expanded by 4MB 00:05:19.725 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.725 EAL: request: mp_malloc_sync 00:05:19.725 EAL: No shared files mode enabled, IPC is disabled 00:05:19.725 EAL: Heap on socket 0 was shrunk by 4MB 00:05:19.725 EAL: Trying to obtain current memory policy. 
00:05:19.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.725 EAL: Restoring previous memory policy: 4 00:05:19.725 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.725 EAL: request: mp_malloc_sync 00:05:19.725 EAL: No shared files mode enabled, IPC is disabled 00:05:19.725 EAL: Heap on socket 0 was expanded by 6MB 00:05:19.725 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.725 EAL: request: mp_malloc_sync 00:05:19.725 EAL: No shared files mode enabled, IPC is disabled 00:05:19.725 EAL: Heap on socket 0 was shrunk by 6MB 00:05:19.725 EAL: Trying to obtain current memory policy. 00:05:19.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.725 EAL: Restoring previous memory policy: 4 00:05:19.725 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.725 EAL: request: mp_malloc_sync 00:05:19.725 EAL: No shared files mode enabled, IPC is disabled 00:05:19.725 EAL: Heap on socket 0 was expanded by 10MB 00:05:19.725 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.725 EAL: request: mp_malloc_sync 00:05:19.725 EAL: No shared files mode enabled, IPC is disabled 00:05:19.725 EAL: Heap on socket 0 was shrunk by 10MB 00:05:19.725 EAL: Trying to obtain current memory policy. 00:05:19.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.725 EAL: Restoring previous memory policy: 4 00:05:19.725 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.725 EAL: request: mp_malloc_sync 00:05:19.725 EAL: No shared files mode enabled, IPC is disabled 00:05:19.725 EAL: Heap on socket 0 was expanded by 18MB 00:05:19.725 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.725 EAL: request: mp_malloc_sync 00:05:19.725 EAL: No shared files mode enabled, IPC is disabled 00:05:19.725 EAL: Heap on socket 0 was shrunk by 18MB 00:05:19.725 EAL: Trying to obtain current memory policy. 
00:05:19.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.725 EAL: Restoring previous memory policy: 4 00:05:19.725 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.725 EAL: request: mp_malloc_sync 00:05:19.725 EAL: No shared files mode enabled, IPC is disabled 00:05:19.725 EAL: Heap on socket 0 was expanded by 34MB 00:05:20.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.007 EAL: request: mp_malloc_sync 00:05:20.007 EAL: No shared files mode enabled, IPC is disabled 00:05:20.007 EAL: Heap on socket 0 was shrunk by 34MB 00:05:20.007 EAL: Trying to obtain current memory policy. 00:05:20.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.007 EAL: Restoring previous memory policy: 4 00:05:20.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.007 EAL: request: mp_malloc_sync 00:05:20.007 EAL: No shared files mode enabled, IPC is disabled 00:05:20.007 EAL: Heap on socket 0 was expanded by 66MB 00:05:20.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.007 EAL: request: mp_malloc_sync 00:05:20.007 EAL: No shared files mode enabled, IPC is disabled 00:05:20.007 EAL: Heap on socket 0 was shrunk by 66MB 00:05:20.007 EAL: Trying to obtain current memory policy. 00:05:20.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.007 EAL: Restoring previous memory policy: 4 00:05:20.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.007 EAL: request: mp_malloc_sync 00:05:20.007 EAL: No shared files mode enabled, IPC is disabled 00:05:20.007 EAL: Heap on socket 0 was expanded by 130MB 00:05:20.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.007 EAL: request: mp_malloc_sync 00:05:20.007 EAL: No shared files mode enabled, IPC is disabled 00:05:20.007 EAL: Heap on socket 0 was shrunk by 130MB 00:05:20.007 EAL: Trying to obtain current memory policy. 
00:05:20.007 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:20.007 EAL: Restoring previous memory policy: 4
00:05:20.007 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.007 EAL: request: mp_malloc_sync
00:05:20.007 EAL: No shared files mode enabled, IPC is disabled
00:05:20.007 EAL: Heap on socket 0 was expanded by 258MB
00:05:20.007 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.265 EAL: request: mp_malloc_sync
00:05:20.265 EAL: No shared files mode enabled, IPC is disabled
00:05:20.265 EAL: Heap on socket 0 was shrunk by 258MB
00:05:20.265 EAL: Trying to obtain current memory policy.
00:05:20.265 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:20.265 EAL: Restoring previous memory policy: 4
00:05:20.265 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.265 EAL: request: mp_malloc_sync
00:05:20.265 EAL: No shared files mode enabled, IPC is disabled
00:05:20.265 EAL: Heap on socket 0 was expanded by 514MB
00:05:20.522 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.522 EAL: request: mp_malloc_sync
00:05:20.522 EAL: No shared files mode enabled, IPC is disabled
00:05:20.522 EAL: Heap on socket 0 was shrunk by 514MB
00:05:20.522 EAL: Trying to obtain current memory policy.
00:05:20.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.780 EAL: Restoring previous memory policy: 4 00:05:20.780 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.780 EAL: request: mp_malloc_sync 00:05:20.780 EAL: No shared files mode enabled, IPC is disabled 00:05:20.780 EAL: Heap on socket 0 was expanded by 1026MB 00:05:21.037 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.295 EAL: request: mp_malloc_sync 00:05:21.295 EAL: No shared files mode enabled, IPC is disabled 00:05:21.295 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:21.295 passed 00:05:21.295 00:05:21.295 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.295 suites 1 1 n/a 0 0 00:05:21.295 tests 2 2 2 0 0 00:05:21.295 asserts 497 497 497 0 n/a 00:05:21.295 00:05:21.295 Elapsed time = 1.424 seconds 00:05:21.295 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.295 EAL: request: mp_malloc_sync 00:05:21.295 EAL: No shared files mode enabled, IPC is disabled 00:05:21.295 EAL: Heap on socket 0 was shrunk by 2MB 00:05:21.295 EAL: No shared files mode enabled, IPC is disabled 00:05:21.295 EAL: No shared files mode enabled, IPC is disabled 00:05:21.295 EAL: No shared files mode enabled, IPC is disabled 00:05:21.295 00:05:21.295 real 0m1.538s 00:05:21.295 user 0m0.889s 00:05:21.295 sys 0m0.617s 00:05:21.295 23:45:26 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.295 23:45:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:21.295 ************************************ 00:05:21.295 END TEST env_vtophys 00:05:21.295 ************************************ 00:05:21.295 23:45:26 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:21.295 23:45:26 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.295 23:45:26 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.295 23:45:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.295 
************************************ 00:05:21.295 START TEST env_pci 00:05:21.295 ************************************ 00:05:21.295 23:45:26 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:21.295 00:05:21.295 00:05:21.295 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.295 http://cunit.sourceforge.net/ 00:05:21.295 00:05:21.295 00:05:21.295 Suite: pci 00:05:21.295 Test: pci_hook ...[2024-07-24 23:45:26.932749] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 652731 has claimed it 00:05:21.295 EAL: Cannot find device (10000:00:01.0) 00:05:21.295 EAL: Failed to attach device on primary process 00:05:21.295 passed 00:05:21.295 00:05:21.295 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.295 suites 1 1 n/a 0 0 00:05:21.295 tests 1 1 1 0 0 00:05:21.295 asserts 25 25 25 0 n/a 00:05:21.295 00:05:21.295 Elapsed time = 0.021 seconds 00:05:21.295 00:05:21.295 real 0m0.032s 00:05:21.295 user 0m0.011s 00:05:21.295 sys 0m0.021s 00:05:21.295 23:45:26 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.295 23:45:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:21.295 ************************************ 00:05:21.295 END TEST env_pci 00:05:21.295 ************************************ 00:05:21.295 23:45:26 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:21.295 23:45:26 env -- env/env.sh@15 -- # uname 00:05:21.295 23:45:26 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:21.295 23:45:26 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:21.295 23:45:26 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:21.295 23:45:26 env -- 
common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:21.295 23:45:26 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.295 23:45:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.295 ************************************ 00:05:21.295 START TEST env_dpdk_post_init 00:05:21.295 ************************************ 00:05:21.295 23:45:27 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:21.554 EAL: Detected CPU lcores: 48 00:05:21.554 EAL: Detected NUMA nodes: 2 00:05:21.554 EAL: Detected shared linkage of DPDK 00:05:21.554 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:21.554 EAL: Selected IOVA mode 'VA' 00:05:21.554 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.554 EAL: VFIO support initialized 00:05:21.554 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:21.554 EAL: Using IOMMU type 1 (Type 1) 00:05:21.554 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:21.554 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:21.554 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:21.554 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:21.554 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:21.554 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:21.554 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:21.554 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:22.491 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:05:22.491 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:22.491 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 
0000:80:04.1 (socket 1) 00:05:22.491 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:22.491 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:22.491 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:22.491 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:22.491 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:22.491 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:25.770 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:05:25.770 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:05:25.770 Starting DPDK initialization... 00:05:25.770 Starting SPDK post initialization... 00:05:25.770 SPDK NVMe probe 00:05:25.770 Attaching to 0000:0b:00.0 00:05:25.770 Attached to 0000:0b:00.0 00:05:25.770 Cleaning up... 00:05:25.770 00:05:25.770 real 0m4.329s 00:05:25.770 user 0m3.204s 00:05:25.770 sys 0m0.180s 00:05:25.770 23:45:31 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.770 23:45:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:25.770 ************************************ 00:05:25.770 END TEST env_dpdk_post_init 00:05:25.770 ************************************ 00:05:25.770 23:45:31 env -- env/env.sh@26 -- # uname 00:05:25.770 23:45:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:25.770 23:45:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:25.770 23:45:31 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.770 23:45:31 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.770 23:45:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.770 ************************************ 00:05:25.770 START TEST env_mem_callbacks 00:05:25.770 
************************************ 00:05:25.770 23:45:31 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:25.770 EAL: Detected CPU lcores: 48 00:05:25.770 EAL: Detected NUMA nodes: 2 00:05:25.770 EAL: Detected shared linkage of DPDK 00:05:25.770 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:25.770 EAL: Selected IOVA mode 'VA' 00:05:25.770 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.770 EAL: VFIO support initialized 00:05:25.770 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:25.770 00:05:25.770 00:05:25.770 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.770 http://cunit.sourceforge.net/ 00:05:25.770 00:05:25.770 00:05:25.770 Suite: memory 00:05:25.770 Test: test ... 00:05:25.770 register 0x200000200000 2097152 00:05:25.770 malloc 3145728 00:05:25.770 register 0x200000400000 4194304 00:05:25.770 buf 0x200000500000 len 3145728 PASSED 00:05:25.770 malloc 64 00:05:25.770 buf 0x2000004fff40 len 64 PASSED 00:05:25.770 malloc 4194304 00:05:25.770 register 0x200000800000 6291456 00:05:25.770 buf 0x200000a00000 len 4194304 PASSED 00:05:25.770 free 0x200000500000 3145728 00:05:25.770 free 0x2000004fff40 64 00:05:25.770 unregister 0x200000400000 4194304 PASSED 00:05:25.770 free 0x200000a00000 4194304 00:05:25.770 unregister 0x200000800000 6291456 PASSED 00:05:25.770 malloc 8388608 00:05:25.770 register 0x200000400000 10485760 00:05:25.770 buf 0x200000600000 len 8388608 PASSED 00:05:25.770 free 0x200000600000 8388608 00:05:25.770 unregister 0x200000400000 10485760 PASSED 00:05:25.770 passed 00:05:25.770 00:05:25.770 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.770 suites 1 1 n/a 0 0 00:05:25.770 tests 1 1 1 0 0 00:05:25.770 asserts 15 15 15 0 n/a 00:05:25.770 00:05:25.770 Elapsed time = 0.005 seconds 00:05:25.770 00:05:25.770 real 0m0.047s 00:05:25.770 user 0m0.014s 00:05:25.770 sys 0m0.033s 
00:05:25.770 23:45:31 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.770 23:45:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:25.770 ************************************ 00:05:25.770 END TEST env_mem_callbacks 00:05:25.770 ************************************ 00:05:25.770 00:05:25.770 real 0m6.376s 00:05:25.770 user 0m4.372s 00:05:25.770 sys 0m1.043s 00:05:25.770 23:45:31 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.770 23:45:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.770 ************************************ 00:05:25.770 END TEST env 00:05:25.770 ************************************ 00:05:25.771 23:45:31 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:25.771 23:45:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.771 23:45:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.771 23:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:26.028 ************************************ 00:05:26.028 START TEST rpc 00:05:26.028 ************************************ 00:05:26.028 23:45:31 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:26.028 * Looking for test storage... 
00:05:26.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:26.028 23:45:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=653381 00:05:26.028 23:45:31 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:26.028 23:45:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.029 23:45:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 653381 00:05:26.029 23:45:31 rpc -- common/autotest_common.sh@827 -- # '[' -z 653381 ']' 00:05:26.029 23:45:31 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.029 23:45:31 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:26.029 23:45:31 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.029 23:45:31 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:26.029 23:45:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.029 [2024-07-24 23:45:31.593863] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:26.029 [2024-07-24 23:45:31.593946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid653381 ] 00:05:26.029 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.029 [2024-07-24 23:45:31.649919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.029 [2024-07-24 23:45:31.737061] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:26.029 [2024-07-24 23:45:31.737131] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 653381' to capture a snapshot of events at runtime. 
00:05:26.029 [2024-07-24 23:45:31.737164] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:26.029 [2024-07-24 23:45:31.737176] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:26.029 [2024-07-24 23:45:31.737186] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid653381 for offline analysis/debug. 00:05:26.029 [2024-07-24 23:45:31.737212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.595 23:45:32 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:26.595 23:45:32 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:26.595 23:45:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:26.595 23:45:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:26.595 23:45:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:26.595 23:45:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:26.595 23:45:32 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.595 23:45:32 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.595 23:45:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.595 ************************************ 00:05:26.595 START TEST rpc_integrity 00:05:26.595 ************************************ 00:05:26.595 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@1121 
-- # rpc_integrity 00:05:26.595 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.595 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.595 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.595 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.595 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.595 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.595 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.595 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.595 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.595 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.595 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.595 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:26.595 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.595 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.595 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.595 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.595 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.595 { 00:05:26.595 "name": "Malloc0", 00:05:26.595 "aliases": [ 00:05:26.595 "37559d40-bedc-44ef-9b7b-00d76886ee6f" 00:05:26.595 ], 00:05:26.595 "product_name": "Malloc disk", 00:05:26.595 "block_size": 512, 00:05:26.595 "num_blocks": 16384, 00:05:26.595 "uuid": "37559d40-bedc-44ef-9b7b-00d76886ee6f", 00:05:26.595 "assigned_rate_limits": { 00:05:26.595 "rw_ios_per_sec": 0, 00:05:26.595 "rw_mbytes_per_sec": 0, 00:05:26.595 "r_mbytes_per_sec": 0, 00:05:26.595 "w_mbytes_per_sec": 0 00:05:26.595 }, 00:05:26.595 "claimed": false, 
00:05:26.595 "zoned": false, 00:05:26.595 "supported_io_types": { 00:05:26.595 "read": true, 00:05:26.595 "write": true, 00:05:26.595 "unmap": true, 00:05:26.595 "write_zeroes": true, 00:05:26.595 "flush": true, 00:05:26.595 "reset": true, 00:05:26.595 "compare": false, 00:05:26.595 "compare_and_write": false, 00:05:26.595 "abort": true, 00:05:26.595 "nvme_admin": false, 00:05:26.595 "nvme_io": false 00:05:26.595 }, 00:05:26.595 "memory_domains": [ 00:05:26.595 { 00:05:26.595 "dma_device_id": "system", 00:05:26.595 "dma_device_type": 1 00:05:26.595 }, 00:05:26.595 { 00:05:26.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.595 "dma_device_type": 2 00:05:26.595 } 00:05:26.595 ], 00:05:26.595 "driver_specific": {} 00:05:26.595 } 00:05:26.595 ]' 00:05:26.595 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:26.595 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.595 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:26.595 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.595 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.595 [2024-07-24 23:45:32.138965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:26.595 [2024-07-24 23:45:32.139013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.595 [2024-07-24 23:45:32.139037] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9c58f0 00:05:26.595 [2024-07-24 23:45:32.139052] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.595 [2024-07-24 23:45:32.140493] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.595 [2024-07-24 23:45:32.140522] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.595 Passthru0 00:05:26.595 23:45:32 rpc.rpc_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.595 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.595 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.595 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.595 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.595 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.595 { 00:05:26.595 "name": "Malloc0", 00:05:26.595 "aliases": [ 00:05:26.595 "37559d40-bedc-44ef-9b7b-00d76886ee6f" 00:05:26.595 ], 00:05:26.595 "product_name": "Malloc disk", 00:05:26.595 "block_size": 512, 00:05:26.595 "num_blocks": 16384, 00:05:26.595 "uuid": "37559d40-bedc-44ef-9b7b-00d76886ee6f", 00:05:26.595 "assigned_rate_limits": { 00:05:26.595 "rw_ios_per_sec": 0, 00:05:26.595 "rw_mbytes_per_sec": 0, 00:05:26.595 "r_mbytes_per_sec": 0, 00:05:26.595 "w_mbytes_per_sec": 0 00:05:26.595 }, 00:05:26.595 "claimed": true, 00:05:26.595 "claim_type": "exclusive_write", 00:05:26.595 "zoned": false, 00:05:26.595 "supported_io_types": { 00:05:26.595 "read": true, 00:05:26.595 "write": true, 00:05:26.595 "unmap": true, 00:05:26.595 "write_zeroes": true, 00:05:26.595 "flush": true, 00:05:26.595 "reset": true, 00:05:26.595 "compare": false, 00:05:26.595 "compare_and_write": false, 00:05:26.595 "abort": true, 00:05:26.595 "nvme_admin": false, 00:05:26.595 "nvme_io": false 00:05:26.595 }, 00:05:26.595 "memory_domains": [ 00:05:26.595 { 00:05:26.595 "dma_device_id": "system", 00:05:26.595 "dma_device_type": 1 00:05:26.595 }, 00:05:26.595 { 00:05:26.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.595 "dma_device_type": 2 00:05:26.595 } 00:05:26.595 ], 00:05:26.595 "driver_specific": {} 00:05:26.595 }, 00:05:26.595 { 00:05:26.595 "name": "Passthru0", 00:05:26.595 "aliases": [ 00:05:26.595 "f40595f7-e713-5d95-acee-4818fa13933e" 00:05:26.595 ], 00:05:26.595 "product_name": "passthru", 00:05:26.595 
"block_size": 512, 00:05:26.595 "num_blocks": 16384, 00:05:26.595 "uuid": "f40595f7-e713-5d95-acee-4818fa13933e", 00:05:26.595 "assigned_rate_limits": { 00:05:26.595 "rw_ios_per_sec": 0, 00:05:26.595 "rw_mbytes_per_sec": 0, 00:05:26.595 "r_mbytes_per_sec": 0, 00:05:26.595 "w_mbytes_per_sec": 0 00:05:26.595 }, 00:05:26.595 "claimed": false, 00:05:26.595 "zoned": false, 00:05:26.595 "supported_io_types": { 00:05:26.595 "read": true, 00:05:26.595 "write": true, 00:05:26.595 "unmap": true, 00:05:26.595 "write_zeroes": true, 00:05:26.596 "flush": true, 00:05:26.596 "reset": true, 00:05:26.596 "compare": false, 00:05:26.596 "compare_and_write": false, 00:05:26.596 "abort": true, 00:05:26.596 "nvme_admin": false, 00:05:26.596 "nvme_io": false 00:05:26.596 }, 00:05:26.596 "memory_domains": [ 00:05:26.596 { 00:05:26.596 "dma_device_id": "system", 00:05:26.596 "dma_device_type": 1 00:05:26.596 }, 00:05:26.596 { 00:05:26.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.596 "dma_device_type": 2 00:05:26.596 } 00:05:26.596 ], 00:05:26.596 "driver_specific": { 00:05:26.596 "passthru": { 00:05:26.596 "name": "Passthru0", 00:05:26.596 "base_bdev_name": "Malloc0" 00:05:26.596 } 00:05:26.596 } 00:05:26.596 } 00:05:26.596 ]' 00:05:26.596 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:26.596 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.596 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.596 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.596 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.596 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.596 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:26.596 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.596 23:45:32 rpc.rpc_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:26.596 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.596 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.596 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.596 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.596 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.596 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:26.596 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:26.596 23:45:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.596 00:05:26.596 real 0m0.225s 00:05:26.596 user 0m0.148s 00:05:26.596 sys 0m0.019s 00:05:26.596 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.596 23:45:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.596 ************************************ 00:05:26.596 END TEST rpc_integrity 00:05:26.596 ************************************ 00:05:26.596 23:45:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:26.596 23:45:32 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.596 23:45:32 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.596 23:45:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.596 ************************************ 00:05:26.596 START TEST rpc_plugins 00:05:26.596 ************************************ 00:05:26.596 23:45:32 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:26.596 23:45:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:26.596 23:45:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.596 23:45:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.854 23:45:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:05:26.854 23:45:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:26.854 23:45:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:26.854 23:45:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.854 23:45:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.854 23:45:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.854 23:45:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:26.854 { 00:05:26.854 "name": "Malloc1", 00:05:26.854 "aliases": [ 00:05:26.854 "688523c9-008b-4d4d-a107-ce4b607feebf" 00:05:26.854 ], 00:05:26.854 "product_name": "Malloc disk", 00:05:26.854 "block_size": 4096, 00:05:26.854 "num_blocks": 256, 00:05:26.854 "uuid": "688523c9-008b-4d4d-a107-ce4b607feebf", 00:05:26.854 "assigned_rate_limits": { 00:05:26.854 "rw_ios_per_sec": 0, 00:05:26.854 "rw_mbytes_per_sec": 0, 00:05:26.854 "r_mbytes_per_sec": 0, 00:05:26.854 "w_mbytes_per_sec": 0 00:05:26.854 }, 00:05:26.854 "claimed": false, 00:05:26.854 "zoned": false, 00:05:26.854 "supported_io_types": { 00:05:26.854 "read": true, 00:05:26.854 "write": true, 00:05:26.854 "unmap": true, 00:05:26.854 "write_zeroes": true, 00:05:26.854 "flush": true, 00:05:26.854 "reset": true, 00:05:26.854 "compare": false, 00:05:26.854 "compare_and_write": false, 00:05:26.854 "abort": true, 00:05:26.854 "nvme_admin": false, 00:05:26.854 "nvme_io": false 00:05:26.854 }, 00:05:26.854 "memory_domains": [ 00:05:26.854 { 00:05:26.854 "dma_device_id": "system", 00:05:26.854 "dma_device_type": 1 00:05:26.854 }, 00:05:26.854 { 00:05:26.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.854 "dma_device_type": 2 00:05:26.854 } 00:05:26.854 ], 00:05:26.854 "driver_specific": {} 00:05:26.854 } 00:05:26.854 ]' 00:05:26.854 23:45:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:26.854 23:45:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:26.854 23:45:32 rpc.rpc_plugins -- 
rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:26.854 23:45:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.854 23:45:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.854 23:45:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.854 23:45:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:26.854 23:45:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.854 23:45:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.854 23:45:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.854 23:45:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:26.854 23:45:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:26.854 23:45:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:26.854 00:05:26.854 real 0m0.110s 00:05:26.854 user 0m0.073s 00:05:26.854 sys 0m0.011s 00:05:26.854 23:45:32 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.854 23:45:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.854 ************************************ 00:05:26.854 END TEST rpc_plugins 00:05:26.854 ************************************ 00:05:26.854 23:45:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:26.854 23:45:32 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.854 23:45:32 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.854 23:45:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.854 ************************************ 00:05:26.854 START TEST rpc_trace_cmd_test 00:05:26.854 ************************************ 00:05:26.854 23:45:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:26.854 23:45:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:26.854 23:45:32 rpc.rpc_trace_cmd_test -- 
rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:26.854 23:45:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.854 23:45:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.854 23:45:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.854 23:45:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:26.854 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid653381", 00:05:26.854 "tpoint_group_mask": "0x8", 00:05:26.854 "iscsi_conn": { 00:05:26.854 "mask": "0x2", 00:05:26.854 "tpoint_mask": "0x0" 00:05:26.854 }, 00:05:26.854 "scsi": { 00:05:26.854 "mask": "0x4", 00:05:26.854 "tpoint_mask": "0x0" 00:05:26.854 }, 00:05:26.854 "bdev": { 00:05:26.854 "mask": "0x8", 00:05:26.854 "tpoint_mask": "0xffffffffffffffff" 00:05:26.854 }, 00:05:26.854 "nvmf_rdma": { 00:05:26.854 "mask": "0x10", 00:05:26.854 "tpoint_mask": "0x0" 00:05:26.854 }, 00:05:26.854 "nvmf_tcp": { 00:05:26.854 "mask": "0x20", 00:05:26.854 "tpoint_mask": "0x0" 00:05:26.854 }, 00:05:26.854 "ftl": { 00:05:26.854 "mask": "0x40", 00:05:26.854 "tpoint_mask": "0x0" 00:05:26.854 }, 00:05:26.854 "blobfs": { 00:05:26.854 "mask": "0x80", 00:05:26.854 "tpoint_mask": "0x0" 00:05:26.854 }, 00:05:26.854 "dsa": { 00:05:26.854 "mask": "0x200", 00:05:26.854 "tpoint_mask": "0x0" 00:05:26.854 }, 00:05:26.854 "thread": { 00:05:26.854 "mask": "0x400", 00:05:26.854 "tpoint_mask": "0x0" 00:05:26.854 }, 00:05:26.854 "nvme_pcie": { 00:05:26.854 "mask": "0x800", 00:05:26.854 "tpoint_mask": "0x0" 00:05:26.854 }, 00:05:26.854 "iaa": { 00:05:26.854 "mask": "0x1000", 00:05:26.854 "tpoint_mask": "0x0" 00:05:26.854 }, 00:05:26.854 "nvme_tcp": { 00:05:26.854 "mask": "0x2000", 00:05:26.854 "tpoint_mask": "0x0" 00:05:26.854 }, 00:05:26.854 "bdev_nvme": { 00:05:26.854 "mask": "0x4000", 00:05:26.854 "tpoint_mask": "0x0" 00:05:26.854 }, 00:05:26.854 "sock": { 00:05:26.854 "mask": "0x8000", 00:05:26.854 "tpoint_mask": "0x0" 00:05:26.854 } 
00:05:26.854 }' 00:05:26.854 23:45:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:26.854 23:45:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:26.854 23:45:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:26.854 23:45:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:26.854 23:45:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:27.111 23:45:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:27.111 23:45:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:27.111 23:45:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:27.111 23:45:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:27.111 23:45:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:27.111 00:05:27.111 real 0m0.196s 00:05:27.111 user 0m0.170s 00:05:27.111 sys 0m0.018s 00:05:27.111 23:45:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.111 23:45:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:27.111 ************************************ 00:05:27.111 END TEST rpc_trace_cmd_test 00:05:27.111 ************************************ 00:05:27.111 23:45:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:27.111 23:45:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:27.111 23:45:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:27.111 23:45:32 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.111 23:45:32 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.111 23:45:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.111 ************************************ 00:05:27.111 START TEST rpc_daemon_integrity 00:05:27.111 ************************************ 00:05:27.111 23:45:32 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@1121 -- # rpc_integrity 00:05:27.111 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:27.111 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.111 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.111 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.111 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:27.111 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:27.111 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:27.111 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:27.111 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.111 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.111 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.111 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:27.111 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:27.112 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.112 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.112 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.112 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:27.112 { 00:05:27.112 "name": "Malloc2", 00:05:27.112 "aliases": [ 00:05:27.112 "1657e21c-f621-4dff-8d8e-d06d064613e3" 00:05:27.112 ], 00:05:27.112 "product_name": "Malloc disk", 00:05:27.112 "block_size": 512, 00:05:27.112 "num_blocks": 16384, 00:05:27.112 "uuid": "1657e21c-f621-4dff-8d8e-d06d064613e3", 00:05:27.112 "assigned_rate_limits": { 00:05:27.112 "rw_ios_per_sec": 0, 
00:05:27.112 "rw_mbytes_per_sec": 0, 00:05:27.112 "r_mbytes_per_sec": 0, 00:05:27.112 "w_mbytes_per_sec": 0 00:05:27.112 }, 00:05:27.112 "claimed": false, 00:05:27.112 "zoned": false, 00:05:27.112 "supported_io_types": { 00:05:27.112 "read": true, 00:05:27.112 "write": true, 00:05:27.112 "unmap": true, 00:05:27.112 "write_zeroes": true, 00:05:27.112 "flush": true, 00:05:27.112 "reset": true, 00:05:27.112 "compare": false, 00:05:27.112 "compare_and_write": false, 00:05:27.112 "abort": true, 00:05:27.112 "nvme_admin": false, 00:05:27.112 "nvme_io": false 00:05:27.112 }, 00:05:27.112 "memory_domains": [ 00:05:27.112 { 00:05:27.112 "dma_device_id": "system", 00:05:27.112 "dma_device_type": 1 00:05:27.112 }, 00:05:27.112 { 00:05:27.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.112 "dma_device_type": 2 00:05:27.112 } 00:05:27.112 ], 00:05:27.112 "driver_specific": {} 00:05:27.112 } 00:05:27.112 ]' 00:05:27.112 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:27.112 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:27.112 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:27.112 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.112 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.112 [2024-07-24 23:45:32.804902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:27.112 [2024-07-24 23:45:32.804947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:27.112 [2024-07-24 23:45:32.804972] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8c0600 00:05:27.112 [2024-07-24 23:45:32.804987] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:27.112 [2024-07-24 23:45:32.806434] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:27.112 
[2024-07-24 23:45:32.806477] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:27.112 Passthru0 00:05:27.112 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.112 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:27.112 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.112 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.112 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.112 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:27.112 { 00:05:27.112 "name": "Malloc2", 00:05:27.112 "aliases": [ 00:05:27.112 "1657e21c-f621-4dff-8d8e-d06d064613e3" 00:05:27.112 ], 00:05:27.112 "product_name": "Malloc disk", 00:05:27.112 "block_size": 512, 00:05:27.112 "num_blocks": 16384, 00:05:27.112 "uuid": "1657e21c-f621-4dff-8d8e-d06d064613e3", 00:05:27.112 "assigned_rate_limits": { 00:05:27.112 "rw_ios_per_sec": 0, 00:05:27.112 "rw_mbytes_per_sec": 0, 00:05:27.112 "r_mbytes_per_sec": 0, 00:05:27.112 "w_mbytes_per_sec": 0 00:05:27.112 }, 00:05:27.112 "claimed": true, 00:05:27.112 "claim_type": "exclusive_write", 00:05:27.112 "zoned": false, 00:05:27.112 "supported_io_types": { 00:05:27.112 "read": true, 00:05:27.112 "write": true, 00:05:27.112 "unmap": true, 00:05:27.112 "write_zeroes": true, 00:05:27.112 "flush": true, 00:05:27.112 "reset": true, 00:05:27.112 "compare": false, 00:05:27.112 "compare_and_write": false, 00:05:27.112 "abort": true, 00:05:27.112 "nvme_admin": false, 00:05:27.112 "nvme_io": false 00:05:27.112 }, 00:05:27.112 "memory_domains": [ 00:05:27.112 { 00:05:27.112 "dma_device_id": "system", 00:05:27.112 "dma_device_type": 1 00:05:27.112 }, 00:05:27.112 { 00:05:27.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.112 "dma_device_type": 2 00:05:27.112 } 00:05:27.112 ], 00:05:27.112 
"driver_specific": {} 00:05:27.112 }, 00:05:27.112 { 00:05:27.112 "name": "Passthru0", 00:05:27.112 "aliases": [ 00:05:27.112 "3330043b-e531-5717-88ea-178873ed78d8" 00:05:27.112 ], 00:05:27.112 "product_name": "passthru", 00:05:27.112 "block_size": 512, 00:05:27.112 "num_blocks": 16384, 00:05:27.112 "uuid": "3330043b-e531-5717-88ea-178873ed78d8", 00:05:27.112 "assigned_rate_limits": { 00:05:27.112 "rw_ios_per_sec": 0, 00:05:27.112 "rw_mbytes_per_sec": 0, 00:05:27.112 "r_mbytes_per_sec": 0, 00:05:27.112 "w_mbytes_per_sec": 0 00:05:27.112 }, 00:05:27.112 "claimed": false, 00:05:27.112 "zoned": false, 00:05:27.112 "supported_io_types": { 00:05:27.112 "read": true, 00:05:27.112 "write": true, 00:05:27.112 "unmap": true, 00:05:27.112 "write_zeroes": true, 00:05:27.112 "flush": true, 00:05:27.112 "reset": true, 00:05:27.112 "compare": false, 00:05:27.112 "compare_and_write": false, 00:05:27.112 "abort": true, 00:05:27.112 "nvme_admin": false, 00:05:27.112 "nvme_io": false 00:05:27.112 }, 00:05:27.112 "memory_domains": [ 00:05:27.112 { 00:05:27.112 "dma_device_id": "system", 00:05:27.112 "dma_device_type": 1 00:05:27.112 }, 00:05:27.112 { 00:05:27.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.112 "dma_device_type": 2 00:05:27.112 } 00:05:27.112 ], 00:05:27.112 "driver_specific": { 00:05:27.112 "passthru": { 00:05:27.112 "name": "Passthru0", 00:05:27.112 "base_bdev_name": "Malloc2" 00:05:27.112 } 00:05:27.112 } 00:05:27.112 } 00:05:27.112 ]' 00:05:27.112 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.369 00:05:27.369 real 0m0.228s 00:05:27.369 user 0m0.148s 00:05:27.369 sys 0m0.022s 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.369 23:45:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.369 ************************************ 00:05:27.369 END TEST rpc_daemon_integrity 00:05:27.370 ************************************ 00:05:27.370 23:45:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:27.370 23:45:32 rpc -- rpc/rpc.sh@84 -- # killprocess 653381 00:05:27.370 23:45:32 rpc -- common/autotest_common.sh@946 -- # '[' -z 653381 ']' 00:05:27.370 23:45:32 rpc -- common/autotest_common.sh@950 -- # kill -0 653381 00:05:27.370 23:45:32 rpc -- common/autotest_common.sh@951 -- # uname 00:05:27.370 23:45:32 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:27.370 23:45:32 rpc -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 653381 00:05:27.370 23:45:32 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:27.370 23:45:32 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:27.370 23:45:32 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 653381' 00:05:27.370 killing process with pid 653381 00:05:27.370 23:45:32 rpc -- common/autotest_common.sh@965 -- # kill 653381 00:05:27.370 23:45:32 rpc -- common/autotest_common.sh@970 -- # wait 653381 00:05:27.934 00:05:27.934 real 0m1.900s 00:05:27.934 user 0m2.349s 00:05:27.934 sys 0m0.597s 00:05:27.934 23:45:33 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.934 23:45:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.934 ************************************ 00:05:27.934 END TEST rpc 00:05:27.934 ************************************ 00:05:27.934 23:45:33 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:27.934 23:45:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.934 23:45:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.934 23:45:33 -- common/autotest_common.sh@10 -- # set +x 00:05:27.934 ************************************ 00:05:27.934 START TEST skip_rpc 00:05:27.934 ************************************ 00:05:27.934 23:45:33 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:27.934 * Looking for test storage... 
00:05:27.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:27.934 23:45:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:27.934 23:45:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:27.934 23:45:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:27.934 23:45:33 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.934 23:45:33 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.934 23:45:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.934 ************************************ 00:05:27.934 START TEST skip_rpc 00:05:27.934 ************************************ 00:05:27.934 23:45:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:27.934 23:45:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=653818 00:05:27.934 23:45:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:27.934 23:45:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.934 23:45:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:27.934 [2024-07-24 23:45:33.564878] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:27.934 [2024-07-24 23:45:33.564953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid653818 ] 00:05:27.934 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.934 [2024-07-24 23:45:33.624843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.192 [2024-07-24 23:45:33.717265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:33.483 
23:45:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 653818 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 653818 ']' 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 653818 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 653818 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 653818' 00:05:33.483 killing process with pid 653818 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 653818 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 653818 00:05:33.483 00:05:33.483 real 0m5.431s 00:05:33.483 user 0m5.108s 00:05:33.483 sys 0m0.326s 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.483 23:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.483 ************************************ 00:05:33.483 END TEST skip_rpc 00:05:33.483 ************************************ 00:05:33.483 23:45:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:33.483 23:45:38 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.483 23:45:38 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.483 23:45:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.483 
************************************ 00:05:33.483 START TEST skip_rpc_with_json 00:05:33.484 ************************************ 00:05:33.484 23:45:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:33.484 23:45:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:33.484 23:45:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=654513 00:05:33.484 23:45:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.484 23:45:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.484 23:45:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 654513 00:05:33.484 23:45:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 654513 ']' 00:05:33.484 23:45:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.484 23:45:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:33.484 23:45:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.484 23:45:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:33.484 23:45:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.484 [2024-07-24 23:45:39.046112] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:33.484 [2024-07-24 23:45:39.046204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid654513 ] 00:05:33.484 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.484 [2024-07-24 23:45:39.106762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.745 [2024-07-24 23:45:39.204137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.006 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:34.006 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:34.006 23:45:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:34.006 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.006 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.006 [2024-07-24 23:45:39.475596] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:34.006 request: 00:05:34.006 { 00:05:34.006 "trtype": "tcp", 00:05:34.006 "method": "nvmf_get_transports", 00:05:34.006 "req_id": 1 00:05:34.006 } 00:05:34.006 Got JSON-RPC error response 00:05:34.006 response: 00:05:34.006 { 00:05:34.006 "code": -19, 00:05:34.006 "message": "No such device" 00:05:34.006 } 00:05:34.006 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:34.006 23:45:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:34.006 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.006 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.006 [2024-07-24 23:45:39.483725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:05:34.006 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.006 23:45:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:34.006 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.006 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.006 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.006 23:45:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:34.006 { 00:05:34.006 "subsystems": [ 00:05:34.006 { 00:05:34.006 "subsystem": "vfio_user_target", 00:05:34.006 "config": null 00:05:34.006 }, 00:05:34.006 { 00:05:34.006 "subsystem": "keyring", 00:05:34.006 "config": [] 00:05:34.006 }, 00:05:34.006 { 00:05:34.006 "subsystem": "iobuf", 00:05:34.006 "config": [ 00:05:34.006 { 00:05:34.006 "method": "iobuf_set_options", 00:05:34.006 "params": { 00:05:34.006 "small_pool_count": 8192, 00:05:34.006 "large_pool_count": 1024, 00:05:34.006 "small_bufsize": 8192, 00:05:34.006 "large_bufsize": 135168 00:05:34.006 } 00:05:34.006 } 00:05:34.006 ] 00:05:34.006 }, 00:05:34.006 { 00:05:34.006 "subsystem": "sock", 00:05:34.006 "config": [ 00:05:34.006 { 00:05:34.006 "method": "sock_set_default_impl", 00:05:34.006 "params": { 00:05:34.006 "impl_name": "posix" 00:05:34.006 } 00:05:34.006 }, 00:05:34.006 { 00:05:34.006 "method": "sock_impl_set_options", 00:05:34.006 "params": { 00:05:34.006 "impl_name": "ssl", 00:05:34.006 "recv_buf_size": 4096, 00:05:34.006 "send_buf_size": 4096, 00:05:34.006 "enable_recv_pipe": true, 00:05:34.006 "enable_quickack": false, 00:05:34.006 "enable_placement_id": 0, 00:05:34.006 "enable_zerocopy_send_server": true, 00:05:34.006 "enable_zerocopy_send_client": false, 00:05:34.006 "zerocopy_threshold": 0, 00:05:34.006 "tls_version": 0, 
00:05:34.006 "enable_ktls": false 00:05:34.006 } 00:05:34.006 }, 00:05:34.006 { 00:05:34.006 "method": "sock_impl_set_options", 00:05:34.006 "params": { 00:05:34.006 "impl_name": "posix", 00:05:34.006 "recv_buf_size": 2097152, 00:05:34.006 "send_buf_size": 2097152, 00:05:34.006 "enable_recv_pipe": true, 00:05:34.006 "enable_quickack": false, 00:05:34.006 "enable_placement_id": 0, 00:05:34.006 "enable_zerocopy_send_server": true, 00:05:34.006 "enable_zerocopy_send_client": false, 00:05:34.006 "zerocopy_threshold": 0, 00:05:34.006 "tls_version": 0, 00:05:34.006 "enable_ktls": false 00:05:34.006 } 00:05:34.006 } 00:05:34.006 ] 00:05:34.006 }, 00:05:34.006 { 00:05:34.006 "subsystem": "vmd", 00:05:34.006 "config": [] 00:05:34.006 }, 00:05:34.006 { 00:05:34.006 "subsystem": "accel", 00:05:34.006 "config": [ 00:05:34.006 { 00:05:34.006 "method": "accel_set_options", 00:05:34.006 "params": { 00:05:34.006 "small_cache_size": 128, 00:05:34.006 "large_cache_size": 16, 00:05:34.006 "task_count": 2048, 00:05:34.006 "sequence_count": 2048, 00:05:34.007 "buf_count": 2048 00:05:34.007 } 00:05:34.007 } 00:05:34.007 ] 00:05:34.007 }, 00:05:34.007 { 00:05:34.007 "subsystem": "bdev", 00:05:34.007 "config": [ 00:05:34.007 { 00:05:34.007 "method": "bdev_set_options", 00:05:34.007 "params": { 00:05:34.007 "bdev_io_pool_size": 65535, 00:05:34.007 "bdev_io_cache_size": 256, 00:05:34.007 "bdev_auto_examine": true, 00:05:34.007 "iobuf_small_cache_size": 128, 00:05:34.007 "iobuf_large_cache_size": 16 00:05:34.007 } 00:05:34.007 }, 00:05:34.007 { 00:05:34.007 "method": "bdev_raid_set_options", 00:05:34.007 "params": { 00:05:34.007 "process_window_size_kb": 1024 00:05:34.007 } 00:05:34.007 }, 00:05:34.007 { 00:05:34.007 "method": "bdev_iscsi_set_options", 00:05:34.007 "params": { 00:05:34.007 "timeout_sec": 30 00:05:34.007 } 00:05:34.007 }, 00:05:34.007 { 00:05:34.007 "method": "bdev_nvme_set_options", 00:05:34.007 "params": { 00:05:34.007 "action_on_timeout": "none", 00:05:34.007 "timeout_us": 
0, 00:05:34.007 "timeout_admin_us": 0, 00:05:34.007 "keep_alive_timeout_ms": 10000, 00:05:34.007 "arbitration_burst": 0, 00:05:34.007 "low_priority_weight": 0, 00:05:34.007 "medium_priority_weight": 0, 00:05:34.007 "high_priority_weight": 0, 00:05:34.007 "nvme_adminq_poll_period_us": 10000, 00:05:34.007 "nvme_ioq_poll_period_us": 0, 00:05:34.007 "io_queue_requests": 0, 00:05:34.007 "delay_cmd_submit": true, 00:05:34.007 "transport_retry_count": 4, 00:05:34.007 "bdev_retry_count": 3, 00:05:34.007 "transport_ack_timeout": 0, 00:05:34.007 "ctrlr_loss_timeout_sec": 0, 00:05:34.007 "reconnect_delay_sec": 0, 00:05:34.007 "fast_io_fail_timeout_sec": 0, 00:05:34.007 "disable_auto_failback": false, 00:05:34.007 "generate_uuids": false, 00:05:34.007 "transport_tos": 0, 00:05:34.007 "nvme_error_stat": false, 00:05:34.007 "rdma_srq_size": 0, 00:05:34.007 "io_path_stat": false, 00:05:34.007 "allow_accel_sequence": false, 00:05:34.007 "rdma_max_cq_size": 0, 00:05:34.007 "rdma_cm_event_timeout_ms": 0, 00:05:34.007 "dhchap_digests": [ 00:05:34.007 "sha256", 00:05:34.007 "sha384", 00:05:34.007 "sha512" 00:05:34.007 ], 00:05:34.007 "dhchap_dhgroups": [ 00:05:34.007 "null", 00:05:34.007 "ffdhe2048", 00:05:34.007 "ffdhe3072", 00:05:34.007 "ffdhe4096", 00:05:34.007 "ffdhe6144", 00:05:34.007 "ffdhe8192" 00:05:34.007 ] 00:05:34.007 } 00:05:34.007 }, 00:05:34.007 { 00:05:34.007 "method": "bdev_nvme_set_hotplug", 00:05:34.007 "params": { 00:05:34.007 "period_us": 100000, 00:05:34.007 "enable": false 00:05:34.007 } 00:05:34.007 }, 00:05:34.007 { 00:05:34.007 "method": "bdev_wait_for_examine" 00:05:34.007 } 00:05:34.007 ] 00:05:34.007 }, 00:05:34.007 { 00:05:34.007 "subsystem": "scsi", 00:05:34.007 "config": null 00:05:34.007 }, 00:05:34.007 { 00:05:34.007 "subsystem": "scheduler", 00:05:34.007 "config": [ 00:05:34.007 { 00:05:34.007 "method": "framework_set_scheduler", 00:05:34.007 "params": { 00:05:34.007 "name": "static" 00:05:34.007 } 00:05:34.007 } 00:05:34.007 ] 00:05:34.007 }, 
00:05:34.007 { 00:05:34.007 "subsystem": "vhost_scsi", 00:05:34.007 "config": [] 00:05:34.007 }, 00:05:34.007 { 00:05:34.007 "subsystem": "vhost_blk", 00:05:34.007 "config": [] 00:05:34.007 }, 00:05:34.007 { 00:05:34.007 "subsystem": "ublk", 00:05:34.007 "config": [] 00:05:34.007 }, 00:05:34.007 { 00:05:34.007 "subsystem": "nbd", 00:05:34.007 "config": [] 00:05:34.007 }, 00:05:34.007 { 00:05:34.007 "subsystem": "nvmf", 00:05:34.007 "config": [ 00:05:34.007 { 00:05:34.007 "method": "nvmf_set_config", 00:05:34.007 "params": { 00:05:34.007 "discovery_filter": "match_any", 00:05:34.007 "admin_cmd_passthru": { 00:05:34.007 "identify_ctrlr": false 00:05:34.007 } 00:05:34.007 } 00:05:34.007 }, 00:05:34.007 { 00:05:34.007 "method": "nvmf_set_max_subsystems", 00:05:34.007 "params": { 00:05:34.007 "max_subsystems": 1024 00:05:34.007 } 00:05:34.007 }, 00:05:34.007 { 00:05:34.007 "method": "nvmf_set_crdt", 00:05:34.007 "params": { 00:05:34.007 "crdt1": 0, 00:05:34.007 "crdt2": 0, 00:05:34.007 "crdt3": 0 00:05:34.007 } 00:05:34.007 }, 00:05:34.007 { 00:05:34.007 "method": "nvmf_create_transport", 00:05:34.007 "params": { 00:05:34.007 "trtype": "TCP", 00:05:34.007 "max_queue_depth": 128, 00:05:34.007 "max_io_qpairs_per_ctrlr": 127, 00:05:34.007 "in_capsule_data_size": 4096, 00:05:34.007 "max_io_size": 131072, 00:05:34.007 "io_unit_size": 131072, 00:05:34.007 "max_aq_depth": 128, 00:05:34.007 "num_shared_buffers": 511, 00:05:34.007 "buf_cache_size": 4294967295, 00:05:34.007 "dif_insert_or_strip": false, 00:05:34.007 "zcopy": false, 00:05:34.007 "c2h_success": true, 00:05:34.007 "sock_priority": 0, 00:05:34.007 "abort_timeout_sec": 1, 00:05:34.007 "ack_timeout": 0, 00:05:34.007 "data_wr_pool_size": 0 00:05:34.007 } 00:05:34.007 } 00:05:34.007 ] 00:05:34.007 }, 00:05:34.007 { 00:05:34.007 "subsystem": "iscsi", 00:05:34.007 "config": [ 00:05:34.007 { 00:05:34.007 "method": "iscsi_set_options", 00:05:34.007 "params": { 00:05:34.007 "node_base": "iqn.2016-06.io.spdk", 00:05:34.007 
"max_sessions": 128, 00:05:34.007 "max_connections_per_session": 2, 00:05:34.007 "max_queue_depth": 64, 00:05:34.007 "default_time2wait": 2, 00:05:34.007 "default_time2retain": 20, 00:05:34.007 "first_burst_length": 8192, 00:05:34.007 "immediate_data": true, 00:05:34.007 "allow_duplicated_isid": false, 00:05:34.007 "error_recovery_level": 0, 00:05:34.007 "nop_timeout": 60, 00:05:34.007 "nop_in_interval": 30, 00:05:34.007 "disable_chap": false, 00:05:34.007 "require_chap": false, 00:05:34.007 "mutual_chap": false, 00:05:34.007 "chap_group": 0, 00:05:34.007 "max_large_datain_per_connection": 64, 00:05:34.007 "max_r2t_per_connection": 4, 00:05:34.007 "pdu_pool_size": 36864, 00:05:34.007 "immediate_data_pool_size": 16384, 00:05:34.007 "data_out_pool_size": 2048 00:05:34.007 } 00:05:34.007 } 00:05:34.007 ] 00:05:34.007 } 00:05:34.007 ] 00:05:34.007 } 00:05:34.007 23:45:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:34.007 23:45:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 654513 00:05:34.007 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 654513 ']' 00:05:34.007 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 654513 00:05:34.007 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:34.007 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:34.007 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 654513 00:05:34.007 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:34.007 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:34.007 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 654513' 00:05:34.007 killing process with pid 654513 00:05:34.007 
23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 654513 00:05:34.007 23:45:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 654513 00:05:34.573 23:45:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=654655 00:05:34.573 23:45:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:34.573 23:45:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 654655 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 654655 ']' 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 654655 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 654655 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 654655' 00:05:39.837 killing process with pid 654655 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 654655 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 654655 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:39.837 00:05:39.837 real 0m6.498s 00:05:39.837 user 0m6.069s 00:05:39.837 sys 0m0.744s 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.837 ************************************ 00:05:39.837 END TEST skip_rpc_with_json 00:05:39.837 ************************************ 00:05:39.837 23:45:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:39.837 23:45:45 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.837 23:45:45 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.837 23:45:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.837 ************************************ 00:05:39.837 START TEST skip_rpc_with_delay 00:05:39.837 ************************************ 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:39.837 23:45:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:40.096 [2024-07-24 23:45:45.591446] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:40.096 [2024-07-24 23:45:45.591568] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:40.096 23:45:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:40.096 23:45:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.096 23:45:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:40.096 23:45:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.096 00:05:40.096 real 0m0.066s 00:05:40.096 user 0m0.041s 00:05:40.096 sys 0m0.025s 00:05:40.096 23:45:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.096 23:45:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:40.096 ************************************ 00:05:40.096 END TEST skip_rpc_with_delay 00:05:40.096 ************************************ 00:05:40.096 23:45:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:40.096 23:45:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:40.096 23:45:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:40.096 23:45:45 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.096 23:45:45 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.096 23:45:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.096 ************************************ 00:05:40.096 START TEST exit_on_failed_rpc_init 00:05:40.096 ************************************ 00:05:40.096 23:45:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:40.096 23:45:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=655364 00:05:40.096 23:45:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.096 23:45:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 655364 00:05:40.096 23:45:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 655364 ']' 00:05:40.096 23:45:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.096 23:45:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:40.096 23:45:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.096 23:45:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:40.096 23:45:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.096 [2024-07-24 23:45:45.700205] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:40.096 [2024-07-24 23:45:45.700296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655364 ] 00:05:40.096 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.096 [2024-07-24 23:45:45.757315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.354 [2024-07-24 23:45:45.847572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.612 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:40.612 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:40.612 23:45:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.612 23:45:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.612 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:40.612 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.612 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.612 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.612 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.612 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.612 23:45:46 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.612 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.612 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.612 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:40.612 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.612 [2024-07-24 23:45:46.161191] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:40.612 [2024-07-24 23:45:46.161282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655379 ] 00:05:40.612 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.612 [2024-07-24 23:45:46.221498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.612 [2024-07-24 23:45:46.318022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.612 [2024-07-24 23:45:46.318151] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:40.612 [2024-07-24 23:45:46.318187] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:40.612 [2024-07-24 23:45:46.318200] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.869 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:40.869 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.869 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:40.869 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:40.869 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:40.869 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.869 23:45:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:40.869 23:45:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 655364 00:05:40.869 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 655364 ']' 00:05:40.869 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 655364 00:05:40.869 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:40.870 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:40.870 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 655364 00:05:40.870 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:40.870 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:40.870 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 655364' 
00:05:40.870 killing process with pid 655364 00:05:40.870 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 655364 00:05:40.870 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 655364 00:05:41.128 00:05:41.128 real 0m1.185s 00:05:41.128 user 0m1.276s 00:05:41.128 sys 0m0.467s 00:05:41.128 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.128 23:45:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:41.128 ************************************ 00:05:41.128 END TEST exit_on_failed_rpc_init 00:05:41.128 ************************************ 00:05:41.386 23:45:46 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:41.386 00:05:41.386 real 0m13.412s 00:05:41.386 user 0m12.592s 00:05:41.386 sys 0m1.711s 00:05:41.386 23:45:46 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.387 23:45:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.387 ************************************ 00:05:41.387 END TEST skip_rpc 00:05:41.387 ************************************ 00:05:41.387 23:45:46 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:41.387 23:45:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.387 23:45:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.387 23:45:46 -- common/autotest_common.sh@10 -- # set +x 00:05:41.387 ************************************ 00:05:41.387 START TEST rpc_client 00:05:41.387 ************************************ 00:05:41.387 23:45:46 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:41.387 * Looking for test storage... 
00:05:41.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:41.387 23:45:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:41.387 OK 00:05:41.387 23:45:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:41.387 00:05:41.387 real 0m0.067s 00:05:41.387 user 0m0.027s 00:05:41.387 sys 0m0.045s 00:05:41.387 23:45:46 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.387 23:45:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:41.387 ************************************ 00:05:41.387 END TEST rpc_client 00:05:41.387 ************************************ 00:05:41.387 23:45:46 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:41.387 23:45:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.387 23:45:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.387 23:45:46 -- common/autotest_common.sh@10 -- # set +x 00:05:41.387 ************************************ 00:05:41.387 START TEST json_config 00:05:41.387 ************************************ 00:05:41.387 23:45:47 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.387 23:45:47 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.387 23:45:47 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.387 23:45:47 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.387 23:45:47 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.387 23:45:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:41.387 23:45:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.387 23:45:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.387 23:45:47 json_config -- paths/export.sh@5 -- # export PATH 00:05:41.387 23:45:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@47 -- # : 0 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.387 23:45:47 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:41.387 23:45:47 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:41.387 23:45:47 json_config -- 
json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:41.387 INFO: JSON configuration test init 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:41.387 23:45:47 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:41.387 23:45:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:41.387 23:45:47 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:41.387 23:45:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.387 23:45:47 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:41.387 23:45:47 json_config -- json_config/common.sh@9 -- # local app=target 00:05:41.387 23:45:47 json_config -- json_config/common.sh@10 -- # shift 00:05:41.387 23:45:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.387 23:45:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.387 23:45:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.387 23:45:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.387 23:45:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.387 23:45:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=655621 00:05:41.387 23:45:47 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:41.387 23:45:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:41.387 Waiting for target to run... 00:05:41.387 23:45:47 json_config -- json_config/common.sh@25 -- # waitforlisten 655621 /var/tmp/spdk_tgt.sock 00:05:41.387 23:45:47 json_config -- common/autotest_common.sh@827 -- # '[' -z 655621 ']' 00:05:41.387 23:45:47 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.387 23:45:47 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:41.387 23:45:47 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.387 23:45:47 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:41.387 23:45:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.646 [2024-07-24 23:45:47.122394] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:41.646 [2024-07-24 23:45:47.122479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655621 ] 00:05:41.646 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.904 [2024-07-24 23:45:47.462218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.904 [2024-07-24 23:45:47.524992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.469 23:45:48 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:42.469 23:45:48 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:42.469 23:45:48 json_config -- json_config/common.sh@26 -- # echo '' 00:05:42.469 00:05:42.469 23:45:48 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:42.469 23:45:48 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:42.469 23:45:48 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:42.469 23:45:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.469 23:45:48 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:42.469 23:45:48 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:42.469 23:45:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.469 23:45:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.469 23:45:48 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:42.469 23:45:48 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:42.469 23:45:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:45.753 23:45:51 json_config -- 
json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:45.753 23:45:51 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:45.753 23:45:51 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:45.753 23:45:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.753 23:45:51 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:45.753 23:45:51 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:45.753 23:45:51 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:45.753 23:45:51 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:45.753 23:45:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:45.753 23:45:51 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:46.011 23:45:51 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:46.011 23:45:51 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:46.011 23:45:51 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:46.011 23:45:51 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:46.011 23:45:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.011 23:45:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.011 23:45:51 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:46.011 23:45:51 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:46.011 23:45:51 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:46.011 23:45:51 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 
00:05:46.011 23:45:51 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:46.011 23:45:51 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:46.011 23:45:51 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:46.011 23:45:51 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:46.011 23:45:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.011 23:45:51 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:46.011 23:45:51 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:46.011 23:45:51 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:46.011 23:45:51 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:46.011 23:45:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:46.269 MallocForNvmf0 00:05:46.269 23:45:51 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:46.269 23:45:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:46.269 MallocForNvmf1 00:05:46.526 23:45:51 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:46.527 23:45:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:46.527 [2024-07-24 23:45:52.229708] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:46.784 23:45:52 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:46.784 23:45:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:46.784 23:45:52 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:46.784 23:45:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:47.041 23:45:52 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:47.041 23:45:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:47.299 23:45:52 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:47.299 23:45:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:47.556 [2024-07-24 23:45:53.208874] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:47.556 23:45:53 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:47.556 23:45:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.556 23:45:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.556 23:45:53 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:47.556 23:45:53 
json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.556 23:45:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.556 23:45:53 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:47.556 23:45:53 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:47.556 23:45:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:47.815 MallocBdevForConfigChangeCheck 00:05:47.815 23:45:53 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:47.815 23:45:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.815 23:45:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.073 23:45:53 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:48.073 23:45:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.331 23:45:53 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:48.331 INFO: shutting down applications... 
00:05:48.331 23:45:53 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:48.331 23:45:53 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:48.331 23:45:53 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:48.331 23:45:53 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:50.230 Calling clear_iscsi_subsystem 00:05:50.230 Calling clear_nvmf_subsystem 00:05:50.230 Calling clear_nbd_subsystem 00:05:50.230 Calling clear_ublk_subsystem 00:05:50.230 Calling clear_vhost_blk_subsystem 00:05:50.230 Calling clear_vhost_scsi_subsystem 00:05:50.230 Calling clear_bdev_subsystem 00:05:50.230 23:45:55 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:50.230 23:45:55 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:50.230 23:45:55 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:50.230 23:45:55 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.230 23:45:55 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:50.230 23:45:55 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:50.230 23:45:55 json_config -- json_config/json_config.sh@345 -- # break 00:05:50.230 23:45:55 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:50.230 23:45:55 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:50.230 23:45:55 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:50.230 23:45:55 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:50.230 23:45:55 json_config -- json_config/common.sh@35 -- # [[ -n 655621 ]] 00:05:50.230 23:45:55 json_config -- json_config/common.sh@38 -- # kill -SIGINT 655621 00:05:50.230 23:45:55 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:50.230 23:45:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.230 23:45:55 json_config -- json_config/common.sh@41 -- # kill -0 655621 00:05:50.230 23:45:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.797 23:45:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.797 23:45:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.797 23:45:56 json_config -- json_config/common.sh@41 -- # kill -0 655621 00:05:50.797 23:45:56 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:50.797 23:45:56 json_config -- json_config/common.sh@43 -- # break 00:05:50.797 23:45:56 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:50.797 23:45:56 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:50.797 SPDK target shutdown done 00:05:50.797 23:45:56 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:50.797 INFO: relaunching applications... 
00:05:50.797 23:45:56 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.797 23:45:56 json_config -- json_config/common.sh@9 -- # local app=target 00:05:50.797 23:45:56 json_config -- json_config/common.sh@10 -- # shift 00:05:50.797 23:45:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:50.797 23:45:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:50.797 23:45:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:50.797 23:45:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.797 23:45:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.797 23:45:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=656812 00:05:50.797 23:45:56 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.797 23:45:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:50.797 Waiting for target to run... 00:05:50.797 23:45:56 json_config -- json_config/common.sh@25 -- # waitforlisten 656812 /var/tmp/spdk_tgt.sock 00:05:50.797 23:45:56 json_config -- common/autotest_common.sh@827 -- # '[' -z 656812 ']' 00:05:50.797 23:45:56 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:50.797 23:45:56 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:50.797 23:45:56 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:50.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:50.797 23:45:56 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:50.797 23:45:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.797 [2024-07-24 23:45:56.440599] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:50.797 [2024-07-24 23:45:56.440683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid656812 ] 00:05:50.797 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.365 [2024-07-24 23:45:56.976933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.365 [2024-07-24 23:45:57.057196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.647 [2024-07-24 23:46:00.087208] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.647 [2024-07-24 23:46:00.119642] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:54.647 23:46:00 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:54.647 23:46:00 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:54.647 23:46:00 json_config -- json_config/common.sh@26 -- # echo '' 00:05:54.647 00:05:54.647 23:46:00 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:54.647 23:46:00 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:54.647 INFO: Checking if target configuration is the same... 
00:05:54.647 23:46:00 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.647 23:46:00 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:54.647 23:46:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.647 + '[' 2 -ne 2 ']' 00:05:54.647 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:54.647 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:54.647 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:54.647 +++ basename /dev/fd/62 00:05:54.647 ++ mktemp /tmp/62.XXX 00:05:54.647 + tmp_file_1=/tmp/62.4KQ 00:05:54.647 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.647 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:54.647 + tmp_file_2=/tmp/spdk_tgt_config.json.Z1s 00:05:54.647 + ret=0 00:05:54.647 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:54.941 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:54.941 + diff -u /tmp/62.4KQ /tmp/spdk_tgt_config.json.Z1s 00:05:54.941 + echo 'INFO: JSON config files are the same' 00:05:54.941 INFO: JSON config files are the same 00:05:54.941 + rm /tmp/62.4KQ /tmp/spdk_tgt_config.json.Z1s 00:05:54.941 + exit 0 00:05:54.941 23:46:00 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:54.941 23:46:00 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:54.941 INFO: changing configuration and checking if this can be detected... 
00:05:54.941 23:46:00 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:54.942 23:46:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:55.199 23:46:00 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.199 23:46:00 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:55.199 23:46:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.199 + '[' 2 -ne 2 ']' 00:05:55.199 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:55.199 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:55.199 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:55.199 +++ basename /dev/fd/62 00:05:55.199 ++ mktemp /tmp/62.XXX 00:05:55.199 + tmp_file_1=/tmp/62.gJN 00:05:55.199 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.199 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:55.199 + tmp_file_2=/tmp/spdk_tgt_config.json.5hw 00:05:55.199 + ret=0 00:05:55.199 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.764 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.764 + diff -u /tmp/62.gJN /tmp/spdk_tgt_config.json.5hw 00:05:55.764 + ret=1 00:05:55.764 + echo '=== Start of file: /tmp/62.gJN ===' 00:05:55.764 + cat /tmp/62.gJN 00:05:55.764 + echo '=== End of file: /tmp/62.gJN ===' 00:05:55.764 + echo '' 00:05:55.764 + echo '=== Start of file: /tmp/spdk_tgt_config.json.5hw ===' 00:05:55.764 + cat /tmp/spdk_tgt_config.json.5hw 00:05:55.764 + echo '=== End of file: /tmp/spdk_tgt_config.json.5hw ===' 00:05:55.764 + echo '' 00:05:55.764 + rm /tmp/62.gJN /tmp/spdk_tgt_config.json.5hw 00:05:55.764 + exit 1 00:05:55.764 23:46:01 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:55.764 INFO: configuration change detected. 
00:05:55.764 23:46:01 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:55.764 23:46:01 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:55.764 23:46:01 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:55.764 23:46:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.764 23:46:01 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:55.764 23:46:01 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:55.764 23:46:01 json_config -- json_config/json_config.sh@317 -- # [[ -n 656812 ]] 00:05:55.764 23:46:01 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:55.764 23:46:01 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:55.764 23:46:01 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:55.764 23:46:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.764 23:46:01 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:55.764 23:46:01 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:55.764 23:46:01 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:55.764 23:46:01 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:55.764 23:46:01 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:55.764 23:46:01 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:55.764 23:46:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.764 23:46:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.764 23:46:01 json_config -- json_config/json_config.sh@323 -- # killprocess 656812 00:05:55.764 23:46:01 json_config -- common/autotest_common.sh@946 -- # '[' -z 656812 ']' 00:05:55.764 23:46:01 json_config -- common/autotest_common.sh@950 -- # kill -0 656812 
00:05:55.764 23:46:01 json_config -- common/autotest_common.sh@951 -- # uname 00:05:55.764 23:46:01 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:55.764 23:46:01 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 656812 00:05:55.764 23:46:01 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:55.764 23:46:01 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:55.764 23:46:01 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 656812' 00:05:55.764 killing process with pid 656812 00:05:55.764 23:46:01 json_config -- common/autotest_common.sh@965 -- # kill 656812 00:05:55.764 23:46:01 json_config -- common/autotest_common.sh@970 -- # wait 656812 00:05:57.664 23:46:02 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:57.664 23:46:02 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:57.664 23:46:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:57.664 23:46:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.664 23:46:02 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:57.664 23:46:02 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:57.664 INFO: Success 00:05:57.664 00:05:57.664 real 0m15.877s 00:05:57.664 user 0m17.614s 00:05:57.664 sys 0m2.030s 00:05:57.664 23:46:02 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.664 23:46:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.664 ************************************ 00:05:57.664 END TEST json_config 00:05:57.664 ************************************ 00:05:57.664 23:46:02 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:57.664 23:46:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.664 23:46:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.664 23:46:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.664 ************************************ 00:05:57.664 START TEST json_config_extra_key 00:05:57.664 ************************************ 00:05:57.664 23:46:02 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:57.664 23:46:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.664 23:46:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:57.664 23:46:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.664 23:46:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.664 23:46:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.664 23:46:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.664 23:46:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.664 23:46:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.664 23:46:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.664 23:46:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.664 23:46:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.664 23:46:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.664 23:46:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:57.664 23:46:02 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:57.664 23:46:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.664 23:46:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.664 23:46:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.664 23:46:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.664 23:46:02 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:57.664 23:46:02 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.665 23:46:02 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.665 23:46:02 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.665 23:46:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.665 23:46:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.665 23:46:02 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.665 23:46:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:57.665 23:46:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.665 23:46:02 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:57.665 23:46:02 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:57.665 23:46:02 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:57.665 23:46:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.665 23:46:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.665 23:46:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.665 23:46:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:57.665 23:46:02 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:57.665 23:46:02 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:57.665 23:46:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:57.665 23:46:02 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:57.665 23:46:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:57.665 23:46:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:57.665 23:46:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:57.665 23:46:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:57.665 23:46:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:57.665 23:46:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:57.665 23:46:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:57.665 23:46:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.665 23:46:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:57.665 INFO: launching applications... 
00:05:57.665 23:46:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.665 23:46:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:57.665 23:46:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:57.665 23:46:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.665 23:46:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.665 23:46:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.665 23:46:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.665 23:46:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.665 23:46:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=657718 00:05:57.665 23:46:02 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.665 23:46:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.665 Waiting for target to run... 
00:05:57.665 23:46:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 657718 /var/tmp/spdk_tgt.sock 00:05:57.665 23:46:02 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 657718 ']' 00:05:57.665 23:46:02 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.665 23:46:02 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:57.665 23:46:02 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.665 23:46:02 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:57.665 23:46:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.665 [2024-07-24 23:46:03.039050] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:57.665 [2024-07-24 23:46:03.039167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid657718 ] 00:05:57.665 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.923 [2024-07-24 23:46:03.388997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.923 [2024-07-24 23:46:03.451915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.489 23:46:03 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:58.489 23:46:03 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:58.489 23:46:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:58.489 00:05:58.489 23:46:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:58.489 INFO: shutting down applications... 
00:05:58.489 23:46:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:58.490 23:46:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:58.490 23:46:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:58.490 23:46:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 657718 ]] 00:05:58.490 23:46:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 657718 00:05:58.490 23:46:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:58.490 23:46:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.490 23:46:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 657718 00:05:58.490 23:46:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.058 23:46:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.058 23:46:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.058 23:46:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 657718 00:05:59.058 23:46:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:59.058 23:46:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:59.058 23:46:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:59.058 23:46:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:59.058 SPDK target shutdown done 00:05:59.058 23:46:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:59.058 Success 00:05:59.058 00:05:59.058 real 0m1.537s 00:05:59.058 user 0m1.488s 00:05:59.058 sys 0m0.439s 00:05:59.058 23:46:04 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.058 23:46:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:59.058 ************************************ 
00:05:59.058 END TEST json_config_extra_key 00:05:59.058 ************************************ 00:05:59.058 23:46:04 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:59.058 23:46:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:59.058 23:46:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.058 23:46:04 -- common/autotest_common.sh@10 -- # set +x 00:05:59.058 ************************************ 00:05:59.058 START TEST alias_rpc 00:05:59.058 ************************************ 00:05:59.058 23:46:04 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:59.058 * Looking for test storage... 00:05:59.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:59.058 23:46:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:59.058 23:46:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=657954 00:05:59.058 23:46:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.058 23:46:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 657954 00:05:59.058 23:46:04 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 657954 ']' 00:05:59.058 23:46:04 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.058 23:46:04 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:59.058 23:46:04 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:59.058 23:46:04 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:59.058 23:46:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.058 [2024-07-24 23:46:04.627224] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:59.058 [2024-07-24 23:46:04.627320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid657954 ] 00:05:59.058 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.058 [2024-07-24 23:46:04.686688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.058 [2024-07-24 23:46:04.772258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.316 23:46:05 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:59.316 23:46:05 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:59.316 23:46:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:59.883 23:46:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 657954 00:05:59.883 23:46:05 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 657954 ']' 00:05:59.883 23:46:05 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 657954 00:05:59.883 23:46:05 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:59.883 23:46:05 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:59.883 23:46:05 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 657954 00:05:59.883 23:46:05 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:59.883 23:46:05 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:59.883 23:46:05 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 657954' 00:05:59.883 killing process with pid 
657954 00:05:59.883 23:46:05 alias_rpc -- common/autotest_common.sh@965 -- # kill 657954 00:05:59.883 23:46:05 alias_rpc -- common/autotest_common.sh@970 -- # wait 657954 00:06:00.142 00:06:00.142 real 0m1.244s 00:06:00.142 user 0m1.313s 00:06:00.142 sys 0m0.439s 00:06:00.142 23:46:05 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.142 23:46:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.142 ************************************ 00:06:00.142 END TEST alias_rpc 00:06:00.142 ************************************ 00:06:00.142 23:46:05 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:00.142 23:46:05 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:00.142 23:46:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.142 23:46:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.142 23:46:05 -- common/autotest_common.sh@10 -- # set +x 00:06:00.142 ************************************ 00:06:00.142 START TEST spdkcli_tcp 00:06:00.142 ************************************ 00:06:00.142 23:46:05 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:00.401 * Looking for test storage... 
00:06:00.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:00.401 23:46:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:00.401 23:46:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:00.401 23:46:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:00.401 23:46:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:00.401 23:46:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:00.401 23:46:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:00.401 23:46:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:00.401 23:46:05 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:00.401 23:46:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.401 23:46:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=658212 00:06:00.401 23:46:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:00.401 23:46:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 658212 00:06:00.401 23:46:05 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 658212 ']' 00:06:00.401 23:46:05 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.401 23:46:05 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:00.401 23:46:05 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:00.401 23:46:05 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:00.401 23:46:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.401 [2024-07-24 23:46:05.917197] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:00.401 [2024-07-24 23:46:05.917280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658212 ] 00:06:00.401 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.401 [2024-07-24 23:46:05.981053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.401 [2024-07-24 23:46:06.078127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.401 [2024-07-24 23:46:06.078132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.660 23:46:06 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:00.660 23:46:06 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:00.660 23:46:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=658230 00:06:00.660 23:46:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:00.660 23:46:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:00.918 [ 00:06:00.918 "bdev_malloc_delete", 00:06:00.918 "bdev_malloc_create", 00:06:00.918 "bdev_null_resize", 00:06:00.918 "bdev_null_delete", 00:06:00.918 "bdev_null_create", 00:06:00.918 "bdev_nvme_cuse_unregister", 00:06:00.918 "bdev_nvme_cuse_register", 00:06:00.918 "bdev_opal_new_user", 00:06:00.918 "bdev_opal_set_lock_state", 00:06:00.918 "bdev_opal_delete", 00:06:00.918 "bdev_opal_get_info", 00:06:00.918 "bdev_opal_create", 00:06:00.918 "bdev_nvme_opal_revert", 00:06:00.918 "bdev_nvme_opal_init", 00:06:00.918 
"bdev_nvme_send_cmd", 00:06:00.918 "bdev_nvme_get_path_iostat", 00:06:00.918 "bdev_nvme_get_mdns_discovery_info", 00:06:00.918 "bdev_nvme_stop_mdns_discovery", 00:06:00.918 "bdev_nvme_start_mdns_discovery", 00:06:00.918 "bdev_nvme_set_multipath_policy", 00:06:00.918 "bdev_nvme_set_preferred_path", 00:06:00.918 "bdev_nvme_get_io_paths", 00:06:00.918 "bdev_nvme_remove_error_injection", 00:06:00.918 "bdev_nvme_add_error_injection", 00:06:00.918 "bdev_nvme_get_discovery_info", 00:06:00.918 "bdev_nvme_stop_discovery", 00:06:00.918 "bdev_nvme_start_discovery", 00:06:00.918 "bdev_nvme_get_controller_health_info", 00:06:00.918 "bdev_nvme_disable_controller", 00:06:00.918 "bdev_nvme_enable_controller", 00:06:00.918 "bdev_nvme_reset_controller", 00:06:00.918 "bdev_nvme_get_transport_statistics", 00:06:00.918 "bdev_nvme_apply_firmware", 00:06:00.919 "bdev_nvme_detach_controller", 00:06:00.919 "bdev_nvme_get_controllers", 00:06:00.919 "bdev_nvme_attach_controller", 00:06:00.919 "bdev_nvme_set_hotplug", 00:06:00.919 "bdev_nvme_set_options", 00:06:00.919 "bdev_passthru_delete", 00:06:00.919 "bdev_passthru_create", 00:06:00.919 "bdev_lvol_set_parent_bdev", 00:06:00.919 "bdev_lvol_set_parent", 00:06:00.919 "bdev_lvol_check_shallow_copy", 00:06:00.919 "bdev_lvol_start_shallow_copy", 00:06:00.919 "bdev_lvol_grow_lvstore", 00:06:00.919 "bdev_lvol_get_lvols", 00:06:00.919 "bdev_lvol_get_lvstores", 00:06:00.919 "bdev_lvol_delete", 00:06:00.919 "bdev_lvol_set_read_only", 00:06:00.919 "bdev_lvol_resize", 00:06:00.919 "bdev_lvol_decouple_parent", 00:06:00.919 "bdev_lvol_inflate", 00:06:00.919 "bdev_lvol_rename", 00:06:00.919 "bdev_lvol_clone_bdev", 00:06:00.919 "bdev_lvol_clone", 00:06:00.919 "bdev_lvol_snapshot", 00:06:00.919 "bdev_lvol_create", 00:06:00.919 "bdev_lvol_delete_lvstore", 00:06:00.919 "bdev_lvol_rename_lvstore", 00:06:00.919 "bdev_lvol_create_lvstore", 00:06:00.919 "bdev_raid_set_options", 00:06:00.919 "bdev_raid_remove_base_bdev", 00:06:00.919 "bdev_raid_add_base_bdev", 
00:06:00.919 "bdev_raid_delete", 00:06:00.919 "bdev_raid_create", 00:06:00.919 "bdev_raid_get_bdevs", 00:06:00.919 "bdev_error_inject_error", 00:06:00.919 "bdev_error_delete", 00:06:00.919 "bdev_error_create", 00:06:00.919 "bdev_split_delete", 00:06:00.919 "bdev_split_create", 00:06:00.919 "bdev_delay_delete", 00:06:00.919 "bdev_delay_create", 00:06:00.919 "bdev_delay_update_latency", 00:06:00.919 "bdev_zone_block_delete", 00:06:00.919 "bdev_zone_block_create", 00:06:00.919 "blobfs_create", 00:06:00.919 "blobfs_detect", 00:06:00.919 "blobfs_set_cache_size", 00:06:00.919 "bdev_aio_delete", 00:06:00.919 "bdev_aio_rescan", 00:06:00.919 "bdev_aio_create", 00:06:00.919 "bdev_ftl_set_property", 00:06:00.919 "bdev_ftl_get_properties", 00:06:00.919 "bdev_ftl_get_stats", 00:06:00.919 "bdev_ftl_unmap", 00:06:00.919 "bdev_ftl_unload", 00:06:00.919 "bdev_ftl_delete", 00:06:00.919 "bdev_ftl_load", 00:06:00.919 "bdev_ftl_create", 00:06:00.919 "bdev_virtio_attach_controller", 00:06:00.919 "bdev_virtio_scsi_get_devices", 00:06:00.919 "bdev_virtio_detach_controller", 00:06:00.919 "bdev_virtio_blk_set_hotplug", 00:06:00.919 "bdev_iscsi_delete", 00:06:00.919 "bdev_iscsi_create", 00:06:00.919 "bdev_iscsi_set_options", 00:06:00.919 "accel_error_inject_error", 00:06:00.919 "ioat_scan_accel_module", 00:06:00.919 "dsa_scan_accel_module", 00:06:00.919 "iaa_scan_accel_module", 00:06:00.919 "vfu_virtio_create_scsi_endpoint", 00:06:00.919 "vfu_virtio_scsi_remove_target", 00:06:00.919 "vfu_virtio_scsi_add_target", 00:06:00.919 "vfu_virtio_create_blk_endpoint", 00:06:00.919 "vfu_virtio_delete_endpoint", 00:06:00.919 "keyring_file_remove_key", 00:06:00.919 "keyring_file_add_key", 00:06:00.919 "keyring_linux_set_options", 00:06:00.919 "iscsi_get_histogram", 00:06:00.919 "iscsi_enable_histogram", 00:06:00.919 "iscsi_set_options", 00:06:00.919 "iscsi_get_auth_groups", 00:06:00.919 "iscsi_auth_group_remove_secret", 00:06:00.919 "iscsi_auth_group_add_secret", 00:06:00.919 "iscsi_delete_auth_group", 
00:06:00.919 "iscsi_create_auth_group", 00:06:00.919 "iscsi_set_discovery_auth", 00:06:00.919 "iscsi_get_options", 00:06:00.919 "iscsi_target_node_request_logout", 00:06:00.919 "iscsi_target_node_set_redirect", 00:06:00.919 "iscsi_target_node_set_auth", 00:06:00.919 "iscsi_target_node_add_lun", 00:06:00.919 "iscsi_get_stats", 00:06:00.919 "iscsi_get_connections", 00:06:00.919 "iscsi_portal_group_set_auth", 00:06:00.919 "iscsi_start_portal_group", 00:06:00.919 "iscsi_delete_portal_group", 00:06:00.919 "iscsi_create_portal_group", 00:06:00.919 "iscsi_get_portal_groups", 00:06:00.919 "iscsi_delete_target_node", 00:06:00.919 "iscsi_target_node_remove_pg_ig_maps", 00:06:00.919 "iscsi_target_node_add_pg_ig_maps", 00:06:00.919 "iscsi_create_target_node", 00:06:00.919 "iscsi_get_target_nodes", 00:06:00.919 "iscsi_delete_initiator_group", 00:06:00.919 "iscsi_initiator_group_remove_initiators", 00:06:00.919 "iscsi_initiator_group_add_initiators", 00:06:00.919 "iscsi_create_initiator_group", 00:06:00.919 "iscsi_get_initiator_groups", 00:06:00.919 "nvmf_set_crdt", 00:06:00.919 "nvmf_set_config", 00:06:00.919 "nvmf_set_max_subsystems", 00:06:00.919 "nvmf_stop_mdns_prr", 00:06:00.919 "nvmf_publish_mdns_prr", 00:06:00.919 "nvmf_subsystem_get_listeners", 00:06:00.919 "nvmf_subsystem_get_qpairs", 00:06:00.919 "nvmf_subsystem_get_controllers", 00:06:00.919 "nvmf_get_stats", 00:06:00.919 "nvmf_get_transports", 00:06:00.919 "nvmf_create_transport", 00:06:00.919 "nvmf_get_targets", 00:06:00.919 "nvmf_delete_target", 00:06:00.919 "nvmf_create_target", 00:06:00.919 "nvmf_subsystem_allow_any_host", 00:06:00.919 "nvmf_subsystem_remove_host", 00:06:00.919 "nvmf_subsystem_add_host", 00:06:00.919 "nvmf_ns_remove_host", 00:06:00.919 "nvmf_ns_add_host", 00:06:00.919 "nvmf_subsystem_remove_ns", 00:06:00.919 "nvmf_subsystem_add_ns", 00:06:00.919 "nvmf_subsystem_listener_set_ana_state", 00:06:00.919 "nvmf_discovery_get_referrals", 00:06:00.919 "nvmf_discovery_remove_referral", 00:06:00.919 
"nvmf_discovery_add_referral", 00:06:00.919 "nvmf_subsystem_remove_listener", 00:06:00.919 "nvmf_subsystem_add_listener", 00:06:00.919 "nvmf_delete_subsystem", 00:06:00.919 "nvmf_create_subsystem", 00:06:00.919 "nvmf_get_subsystems", 00:06:00.919 "env_dpdk_get_mem_stats", 00:06:00.919 "nbd_get_disks", 00:06:00.919 "nbd_stop_disk", 00:06:00.919 "nbd_start_disk", 00:06:00.919 "ublk_recover_disk", 00:06:00.919 "ublk_get_disks", 00:06:00.919 "ublk_stop_disk", 00:06:00.919 "ublk_start_disk", 00:06:00.919 "ublk_destroy_target", 00:06:00.919 "ublk_create_target", 00:06:00.919 "virtio_blk_create_transport", 00:06:00.919 "virtio_blk_get_transports", 00:06:00.919 "vhost_controller_set_coalescing", 00:06:00.919 "vhost_get_controllers", 00:06:00.919 "vhost_delete_controller", 00:06:00.919 "vhost_create_blk_controller", 00:06:00.919 "vhost_scsi_controller_remove_target", 00:06:00.919 "vhost_scsi_controller_add_target", 00:06:00.919 "vhost_start_scsi_controller", 00:06:00.919 "vhost_create_scsi_controller", 00:06:00.919 "thread_set_cpumask", 00:06:00.919 "framework_get_scheduler", 00:06:00.919 "framework_set_scheduler", 00:06:00.919 "framework_get_reactors", 00:06:00.919 "thread_get_io_channels", 00:06:00.919 "thread_get_pollers", 00:06:00.919 "thread_get_stats", 00:06:00.919 "framework_monitor_context_switch", 00:06:00.919 "spdk_kill_instance", 00:06:00.919 "log_enable_timestamps", 00:06:00.919 "log_get_flags", 00:06:00.919 "log_clear_flag", 00:06:00.919 "log_set_flag", 00:06:00.919 "log_get_level", 00:06:00.919 "log_set_level", 00:06:00.919 "log_get_print_level", 00:06:00.919 "log_set_print_level", 00:06:00.919 "framework_enable_cpumask_locks", 00:06:00.919 "framework_disable_cpumask_locks", 00:06:00.919 "framework_wait_init", 00:06:00.919 "framework_start_init", 00:06:00.919 "scsi_get_devices", 00:06:00.919 "bdev_get_histogram", 00:06:00.919 "bdev_enable_histogram", 00:06:00.919 "bdev_set_qos_limit", 00:06:00.919 "bdev_set_qd_sampling_period", 00:06:00.919 "bdev_get_bdevs", 
00:06:00.919 "bdev_reset_iostat", 00:06:00.919 "bdev_get_iostat", 00:06:00.919 "bdev_examine", 00:06:00.919 "bdev_wait_for_examine", 00:06:00.919 "bdev_set_options", 00:06:00.919 "notify_get_notifications", 00:06:00.919 "notify_get_types", 00:06:00.919 "accel_get_stats", 00:06:00.919 "accel_set_options", 00:06:00.919 "accel_set_driver", 00:06:00.919 "accel_crypto_key_destroy", 00:06:00.919 "accel_crypto_keys_get", 00:06:00.919 "accel_crypto_key_create", 00:06:00.919 "accel_assign_opc", 00:06:00.920 "accel_get_module_info", 00:06:00.920 "accel_get_opc_assignments", 00:06:00.920 "vmd_rescan", 00:06:00.920 "vmd_remove_device", 00:06:00.920 "vmd_enable", 00:06:00.920 "sock_get_default_impl", 00:06:00.920 "sock_set_default_impl", 00:06:00.920 "sock_impl_set_options", 00:06:00.920 "sock_impl_get_options", 00:06:00.920 "iobuf_get_stats", 00:06:00.920 "iobuf_set_options", 00:06:00.920 "keyring_get_keys", 00:06:00.920 "framework_get_pci_devices", 00:06:00.920 "framework_get_config", 00:06:00.920 "framework_get_subsystems", 00:06:00.920 "vfu_tgt_set_base_path", 00:06:00.920 "trace_get_info", 00:06:00.920 "trace_get_tpoint_group_mask", 00:06:00.920 "trace_disable_tpoint_group", 00:06:00.920 "trace_enable_tpoint_group", 00:06:00.920 "trace_clear_tpoint_mask", 00:06:00.920 "trace_set_tpoint_mask", 00:06:00.920 "spdk_get_version", 00:06:00.920 "rpc_get_methods" 00:06:00.920 ] 00:06:00.920 23:46:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:00.920 23:46:06 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.920 23:46:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.920 23:46:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:00.920 23:46:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 658212 00:06:00.920 23:46:06 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 658212 ']' 00:06:00.920 23:46:06 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 658212 00:06:00.920 
23:46:06 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:00.920 23:46:06 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:00.920 23:46:06 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 658212 00:06:01.177 23:46:06 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:01.177 23:46:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:01.177 23:46:06 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 658212' 00:06:01.177 killing process with pid 658212 00:06:01.177 23:46:06 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 658212 00:06:01.177 23:46:06 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 658212 00:06:01.435 00:06:01.435 real 0m1.245s 00:06:01.435 user 0m2.212s 00:06:01.435 sys 0m0.449s 00:06:01.435 23:46:07 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.435 23:46:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.435 ************************************ 00:06:01.435 END TEST spdkcli_tcp 00:06:01.435 ************************************ 00:06:01.435 23:46:07 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.435 23:46:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:01.435 23:46:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.435 23:46:07 -- common/autotest_common.sh@10 -- # set +x 00:06:01.436 ************************************ 00:06:01.436 START TEST dpdk_mem_utility 00:06:01.436 ************************************ 00:06:01.436 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.436 * Looking for test storage... 
00:06:01.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:01.694 23:46:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:01.694 23:46:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=658420 00:06:01.694 23:46:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.694 23:46:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 658420 00:06:01.694 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 658420 ']' 00:06:01.694 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.694 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:01.694 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.694 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:01.694 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.694 [2024-07-24 23:46:07.203580] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:01.694 [2024-07-24 23:46:07.203660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658420 ] 00:06:01.694 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.694 [2024-07-24 23:46:07.259926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.694 [2024-07-24 23:46:07.345135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.952 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:01.952 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:01.952 23:46:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:01.952 23:46:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:01.952 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.952 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.952 { 00:06:01.952 "filename": "/tmp/spdk_mem_dump.txt" 00:06:01.952 } 00:06:01.952 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.952 23:46:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:01.952 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:01.952 1 heaps totaling size 814.000000 MiB 00:06:01.952 size: 814.000000 MiB heap id: 0 00:06:01.952 end heaps---------- 00:06:01.952 8 mempools totaling size 598.116089 MiB 00:06:01.952 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:01.952 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:01.952 size: 84.521057 MiB name: bdev_io_658420 00:06:01.952 size: 51.011292 MiB name: evtpool_658420 00:06:01.952 size: 50.003479 MiB 
name: msgpool_658420 00:06:01.952 size: 21.763794 MiB name: PDU_Pool 00:06:01.952 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:01.952 size: 0.026123 MiB name: Session_Pool 00:06:01.952 end mempools------- 00:06:01.952 6 memzones totaling size 4.142822 MiB 00:06:01.952 size: 1.000366 MiB name: RG_ring_0_658420 00:06:01.952 size: 1.000366 MiB name: RG_ring_1_658420 00:06:01.952 size: 1.000366 MiB name: RG_ring_4_658420 00:06:01.952 size: 1.000366 MiB name: RG_ring_5_658420 00:06:01.952 size: 0.125366 MiB name: RG_ring_2_658420 00:06:01.952 size: 0.015991 MiB name: RG_ring_3_658420 00:06:01.952 end memzones------- 00:06:01.952 23:46:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:02.211 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:02.211 list of free elements. size: 12.519348 MiB 00:06:02.211 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:02.211 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:02.211 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:02.211 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:02.211 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:02.211 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:02.211 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:02.211 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:02.211 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:02.211 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:02.211 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:02.211 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:02.211 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:02.211 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:02.211 element at address: 
0x200003a00000 with size: 0.355530 MiB 00:06:02.211 list of standard malloc elements. size: 199.218079 MiB 00:06:02.211 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:02.211 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:02.211 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:02.211 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:02.211 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:02.211 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:02.211 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:02.211 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:02.211 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:02.211 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:02.211 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:02.211 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:02.211 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:02.211 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:02.211 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:02.211 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:02.211 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:02.211 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:02.211 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:02.211 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:02.211 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:02.211 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:02.211 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:02.211 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:02.211 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:02.211 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:02.211 
element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:02.211 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:02.211 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:02.211 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:02.211 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:02.211 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:02.211 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:02.211 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:02.211 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:02.211 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:02.211 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:02.211 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:02.211 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:02.211 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:02.211 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:02.211 list of memzone associated elements. 
size: 602.262573 MiB 00:06:02.211 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:02.211 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:02.211 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:02.211 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:02.211 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:02.211 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_658420_0 00:06:02.211 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:02.211 associated memzone info: size: 48.002930 MiB name: MP_evtpool_658420_0 00:06:02.211 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:02.211 associated memzone info: size: 48.002930 MiB name: MP_msgpool_658420_0 00:06:02.211 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:02.211 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:02.211 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:02.211 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:02.211 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:02.211 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_658420 00:06:02.211 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:02.211 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_658420 00:06:02.211 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:02.211 associated memzone info: size: 1.007996 MiB name: MP_evtpool_658420 00:06:02.211 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:02.212 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:02.212 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:02.212 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:02.212 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:02.212 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:02.212 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:02.212 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:02.212 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:02.212 associated memzone info: size: 1.000366 MiB name: RG_ring_0_658420 00:06:02.212 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:02.212 associated memzone info: size: 1.000366 MiB name: RG_ring_1_658420 00:06:02.212 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:02.212 associated memzone info: size: 1.000366 MiB name: RG_ring_4_658420 00:06:02.212 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:02.212 associated memzone info: size: 1.000366 MiB name: RG_ring_5_658420 00:06:02.212 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:02.212 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_658420 00:06:02.212 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:02.212 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:02.212 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:02.212 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:02.212 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:02.212 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:02.212 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:02.212 associated memzone info: size: 0.125366 MiB name: RG_ring_2_658420 00:06:02.212 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:02.212 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:02.212 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:02.212 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:02.212 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:02.212 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_658420 00:06:02.212 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:02.212 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:02.212 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:02.212 associated memzone info: size: 0.000183 MiB name: MP_msgpool_658420 00:06:02.212 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:02.212 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_658420 00:06:02.212 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:02.212 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:02.212 23:46:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:02.212 23:46:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 658420 00:06:02.212 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 658420 ']' 00:06:02.212 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 658420 00:06:02.212 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:02.212 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:02.212 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 658420 00:06:02.212 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:02.212 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:02.212 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 658420' 00:06:02.212 killing process with pid 658420 00:06:02.212 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 658420 00:06:02.212 23:46:07 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 658420 00:06:02.470 00:06:02.470 real 0m1.050s 00:06:02.470 user 0m0.999s 
00:06:02.470 sys 0m0.408s 00:06:02.470 23:46:08 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.470 23:46:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.470 ************************************ 00:06:02.470 END TEST dpdk_mem_utility 00:06:02.470 ************************************ 00:06:02.470 23:46:08 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:02.470 23:46:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:02.470 23:46:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.470 23:46:08 -- common/autotest_common.sh@10 -- # set +x 00:06:02.728 ************************************ 00:06:02.728 START TEST event 00:06:02.728 ************************************ 00:06:02.728 23:46:08 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:02.728 * Looking for test storage... 
00:06:02.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:02.728 23:46:08 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:02.728 23:46:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:02.728 23:46:08 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:02.728 23:46:08 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:02.728 23:46:08 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.728 23:46:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.728 ************************************ 00:06:02.728 START TEST event_perf 00:06:02.728 ************************************ 00:06:02.728 23:46:08 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:02.728 Running I/O for 1 seconds...[2024-07-24 23:46:08.292299] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:02.728 [2024-07-24 23:46:08.292361] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658608 ] 00:06:02.728 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.728 [2024-07-24 23:46:08.354734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:02.986 [2024-07-24 23:46:08.449751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.986 [2024-07-24 23:46:08.449804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.986 [2024-07-24 23:46:08.449918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.986 [2024-07-24 23:46:08.449920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.918 Running I/O for 1 seconds... 00:06:03.918 lcore 0: 229270 00:06:03.918 lcore 1: 229269 00:06:03.918 lcore 2: 229269 00:06:03.918 lcore 3: 229268 00:06:03.918 done. 
00:06:03.918 00:06:03.918 real 0m1.256s 00:06:03.918 user 0m4.159s 00:06:03.918 sys 0m0.092s 00:06:03.918 23:46:09 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.918 23:46:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.918 ************************************ 00:06:03.918 END TEST event_perf 00:06:03.918 ************************************ 00:06:03.918 23:46:09 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:03.918 23:46:09 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:03.918 23:46:09 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.918 23:46:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.918 ************************************ 00:06:03.918 START TEST event_reactor 00:06:03.918 ************************************ 00:06:03.918 23:46:09 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:03.918 [2024-07-24 23:46:09.594831] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:03.918 [2024-07-24 23:46:09.594895] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658763 ] 00:06:03.918 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.176 [2024-07-24 23:46:09.655716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.176 [2024-07-24 23:46:09.751179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.548 test_start 00:06:05.548 oneshot 00:06:05.548 tick 100 00:06:05.548 tick 100 00:06:05.548 tick 250 00:06:05.548 tick 100 00:06:05.548 tick 100 00:06:05.548 tick 100 00:06:05.548 tick 250 00:06:05.548 tick 500 00:06:05.548 tick 100 00:06:05.548 tick 100 00:06:05.548 tick 250 00:06:05.548 tick 100 00:06:05.548 tick 100 00:06:05.548 test_end 00:06:05.548 00:06:05.548 real 0m1.248s 00:06:05.548 user 0m1.160s 00:06:05.548 sys 0m0.083s 00:06:05.548 23:46:10 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.548 23:46:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:05.548 ************************************ 00:06:05.548 END TEST event_reactor 00:06:05.548 ************************************ 00:06:05.548 23:46:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.548 23:46:10 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:05.548 23:46:10 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.548 23:46:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.548 ************************************ 00:06:05.548 START TEST event_reactor_perf 00:06:05.548 ************************************ 00:06:05.548 23:46:10 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.548 [2024-07-24 23:46:10.891983] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:05.548 [2024-07-24 23:46:10.892050] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658925 ] 00:06:05.548 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.548 [2024-07-24 23:46:10.953607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.548 [2024-07-24 23:46:11.049069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.481 test_start 00:06:06.481 test_end 00:06:06.481 Performance: 354389 events per second 00:06:06.481 00:06:06.481 real 0m1.255s 00:06:06.481 user 0m1.157s 00:06:06.481 sys 0m0.093s 00:06:06.481 23:46:12 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.481 23:46:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.481 ************************************ 00:06:06.481 END TEST event_reactor_perf 00:06:06.481 ************************************ 00:06:06.481 23:46:12 event -- event/event.sh@49 -- # uname -s 00:06:06.481 23:46:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:06.481 23:46:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:06.481 23:46:12 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.481 23:46:12 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.481 23:46:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.481 ************************************ 00:06:06.481 START TEST event_scheduler 00:06:06.481 ************************************ 00:06:06.481 23:46:12 
event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:06.740 * Looking for test storage... 00:06:06.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:06.740 23:46:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:06.740 23:46:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=659111 00:06:06.740 23:46:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:06.740 23:46:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.740 23:46:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 659111 00:06:06.740 23:46:12 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 659111 ']' 00:06:06.740 23:46:12 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.740 23:46:12 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:06.740 23:46:12 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.740 23:46:12 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:06.740 23:46:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.740 [2024-07-24 23:46:12.281343] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:06.740 [2024-07-24 23:46:12.281428] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659111 ] 00:06:06.740 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.740 [2024-07-24 23:46:12.337584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.740 [2024-07-24 23:46:12.426095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.740 [2024-07-24 23:46:12.426160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.740 [2024-07-24 23:46:12.426186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.740 [2024-07-24 23:46:12.426189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.998 23:46:12 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:06.998 23:46:12 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:06.998 23:46:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:06.998 23:46:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.998 23:46:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.998 POWER: Env isn't set yet! 00:06:06.998 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:06.998 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:06.998 POWER: Cannot get available frequencies of lcore 0 00:06:06.998 POWER: Attempting to initialise PSTAT power management... 
00:06:06.998 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:06.998 POWER: Initialized successfully for lcore 0 power management 00:06:06.998 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:06.998 POWER: Initialized successfully for lcore 1 power management 00:06:06.998 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:06.998 POWER: Initialized successfully for lcore 2 power management 00:06:06.998 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:06.998 POWER: Initialized successfully for lcore 3 power management 00:06:06.998 [2024-07-24 23:46:12.527293] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:06.998 [2024-07-24 23:46:12.527311] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:06.998 [2024-07-24 23:46:12.527322] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:06.999 23:46:12 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.999 23:46:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:06.999 23:46:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.999 23:46:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.999 [2024-07-24 23:46:12.629512] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:06.999 23:46:12 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.999 23:46:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:06.999 23:46:12 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.999 23:46:12 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.999 23:46:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.999 ************************************ 00:06:06.999 START TEST scheduler_create_thread 00:06:06.999 ************************************ 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.999 2 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.999 3 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.999 4 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.999 5 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.999 6 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.999 23:46:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:07.257 7 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.257 8 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.257 9 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.257 10 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.257 23:46:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.629 23:46:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.629 23:46:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:08.629 23:46:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:08.629 23:46:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.629 23:46:14 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.562 23:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.562 00:06:09.562 real 0m2.617s 00:06:09.562 user 0m0.013s 00:06:09.562 sys 0m0.002s 00:06:09.562 23:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.562 23:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.562 ************************************ 00:06:09.562 END TEST scheduler_create_thread 00:06:09.562 ************************************ 00:06:09.819 23:46:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:09.819 23:46:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 659111 00:06:09.819 23:46:15 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 659111 ']' 00:06:09.819 23:46:15 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 659111 00:06:09.819 23:46:15 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:06:09.819 23:46:15 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:09.819 23:46:15 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 659111 00:06:09.819 23:46:15 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:09.819 23:46:15 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:09.819 23:46:15 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 659111' 00:06:09.819 killing process with pid 659111 00:06:09.819 23:46:15 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 659111 00:06:09.819 23:46:15 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 659111 00:06:10.076 [2024-07-24 23:46:15.756692] 
scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:10.335 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:10.335 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:10.335 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:10.335 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:10.335 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:10.335 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:10.335 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:10.335 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:10.335 00:06:10.335 real 0m3.807s 00:06:10.335 user 0m5.826s 00:06:10.335 sys 0m0.318s 00:06:10.335 23:46:15 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.335 23:46:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.335 ************************************ 00:06:10.335 END TEST event_scheduler 00:06:10.335 ************************************ 00:06:10.335 23:46:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:10.335 23:46:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:10.335 23:46:16 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.335 23:46:16 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.335 23:46:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.335 ************************************ 00:06:10.335 START TEST app_repeat 00:06:10.335 ************************************ 00:06:10.335 23:46:16 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:10.335 
23:46:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.335 23:46:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.335 23:46:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:10.335 23:46:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.335 23:46:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:10.335 23:46:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:10.335 23:46:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:10.594 23:46:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=659675 00:06:10.594 23:46:16 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:10.594 23:46:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.594 23:46:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 659675' 00:06:10.594 Process app_repeat pid: 659675 00:06:10.594 23:46:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.594 23:46:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:10.594 spdk_app_start Round 0 00:06:10.594 23:46:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 659675 /var/tmp/spdk-nbd.sock 00:06:10.594 23:46:16 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 659675 ']' 00:06:10.594 23:46:16 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.594 23:46:16 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.594 23:46:16 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:10.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.594 23:46:16 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.594 23:46:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.594 [2024-07-24 23:46:16.071404] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:10.594 [2024-07-24 23:46:16.071474] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659675 ] 00:06:10.594 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.594 [2024-07-24 23:46:16.132339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.594 [2024-07-24 23:46:16.226378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.594 [2024-07-24 23:46:16.226405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.851 23:46:16 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.851 23:46:16 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:10.852 23:46:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.109 Malloc0 00:06:11.109 23:46:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.384 Malloc1 00:06:11.384 23:46:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.384 23:46:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.384 23:46:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 
00:06:11.384 23:46:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.384 23:46:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.384 23:46:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.384 23:46:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.384 23:46:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.384 23:46:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.384 23:46:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.384 23:46:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.384 23:46:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.384 23:46:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:11.384 23:46:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.384 23:46:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.384 23:46:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.641 /dev/nbd0 00:06:11.641 23:46:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.641 23:46:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.641 23:46:17 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:11.641 23:46:17 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:11.641 23:46:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:11.641 23:46:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:11.641 23:46:17 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 
/proc/partitions 00:06:11.641 23:46:17 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:11.641 23:46:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:11.641 23:46:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:11.641 23:46:17 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.641 1+0 records in 00:06:11.641 1+0 records out 00:06:11.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000150162 s, 27.3 MB/s 00:06:11.642 23:46:17 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.642 23:46:17 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:11.642 23:46:17 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.642 23:46:17 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:11.642 23:46:17 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:11.642 23:46:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.642 23:46:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.642 23:46:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.899 /dev/nbd1 00:06:11.899 23:46:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.899 23:46:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.899 23:46:17 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:11.899 23:46:17 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:11.899 23:46:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 
1 )) 00:06:11.899 23:46:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:11.899 23:46:17 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:11.899 23:46:17 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:11.899 23:46:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:11.899 23:46:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:11.899 23:46:17 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.899 1+0 records in 00:06:11.899 1+0 records out 00:06:11.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208724 s, 19.6 MB/s 00:06:11.899 23:46:17 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.899 23:46:17 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:11.899 23:46:17 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.899 23:46:17 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:11.899 23:46:17 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:11.899 23:46:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.899 23:46:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.899 23:46:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.899 23:46:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.899 23:46:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.157 23:46:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:12.157 { 00:06:12.157 "nbd_device": "/dev/nbd0", 00:06:12.157 "bdev_name": "Malloc0" 00:06:12.157 }, 00:06:12.157 { 00:06:12.157 "nbd_device": "/dev/nbd1", 00:06:12.157 "bdev_name": "Malloc1" 00:06:12.157 } 00:06:12.157 ]' 00:06:12.157 23:46:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.157 { 00:06:12.157 "nbd_device": "/dev/nbd0", 00:06:12.157 "bdev_name": "Malloc0" 00:06:12.158 }, 00:06:12.158 { 00:06:12.158 "nbd_device": "/dev/nbd1", 00:06:12.158 "bdev_name": "Malloc1" 00:06:12.158 } 00:06:12.158 ]' 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.158 /dev/nbd1' 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.158 /dev/nbd1' 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd 
if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.158 256+0 records in 00:06:12.158 256+0 records out 00:06:12.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490905 s, 214 MB/s 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.158 256+0 records in 00:06:12.158 256+0 records out 00:06:12.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021139 s, 49.6 MB/s 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.158 256+0 records in 00:06:12.158 256+0 records out 00:06:12.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240594 s, 43.6 MB/s 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.158 23:46:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.415 23:46:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.415 23:46:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.415 23:46:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.415 23:46:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.415 23:46:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.416 23:46:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.416 23:46:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.416 23:46:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.416 23:46:18 
event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.416 23:46:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.697 23:46:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.697 23:46:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.697 23:46:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.697 23:46:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.697 23:46:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.697 23:46:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.698 23:46:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.698 23:46:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.698 23:46:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.698 23:46:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.698 23:46:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.960 23:46:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.960 23:46:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.960 23:46:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.960 23:46:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.960 23:46:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.960 23:46:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.960 23:46:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:12.960 23:46:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 
00:06:12.960 23:46:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.960 23:46:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.960 23:46:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.960 23:46:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.960 23:46:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.218 23:46:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:13.476 [2024-07-24 23:46:19.119069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.734 [2024-07-24 23:46:19.211218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.734 [2024-07-24 23:46:19.211218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.734 [2024-07-24 23:46:19.274168] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.734 [2024-07-24 23:46:19.274230] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:16.260 23:46:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:16.260 23:46:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:16.260 spdk_app_start Round 1 00:06:16.260 23:46:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 659675 /var/tmp/spdk-nbd.sock 00:06:16.260 23:46:21 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 659675 ']' 00:06:16.260 23:46:21 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.260 23:46:21 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:16.260 23:46:21 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:16.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:16.260 23:46:21 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:16.260 23:46:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:16.517 23:46:22 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:16.517 23:46:22 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:16.517 23:46:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.776 Malloc0 00:06:16.776 23:46:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.034 Malloc1 00:06:17.035 23:46:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.035 23:46:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.035 23:46:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.035 23:46:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.035 23:46:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.035 23:46:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.035 23:46:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.035 23:46:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.035 23:46:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.035 23:46:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.035 23:46:22 event.app_repeat -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.035 23:46:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.035 23:46:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:17.035 23:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.035 23:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.035 23:46:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:17.292 /dev/nbd0 00:06:17.292 23:46:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.292 23:46:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.292 23:46:22 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:17.293 23:46:22 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:17.293 23:46:22 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:17.293 23:46:22 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:17.293 23:46:22 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:17.293 23:46:22 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:17.293 23:46:22 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:17.293 23:46:22 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:17.293 23:46:22 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.293 1+0 records in 00:06:17.293 1+0 records out 00:06:17.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192691 s, 21.3 MB/s 00:06:17.293 23:46:22 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.293 23:46:22 
event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:17.293 23:46:22 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.293 23:46:22 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:17.293 23:46:22 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:17.293 23:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.293 23:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.293 23:46:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.550 /dev/nbd1 00:06:17.550 23:46:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.550 23:46:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.550 23:46:23 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:17.550 23:46:23 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:17.550 23:46:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:17.550 23:46:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:17.550 23:46:23 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:17.550 23:46:23 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:17.550 23:46:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:17.550 23:46:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:17.550 23:46:23 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.550 1+0 records in 00:06:17.550 1+0 records out 00:06:17.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228283 s, 
17.9 MB/s 00:06:17.550 23:46:23 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.550 23:46:23 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:17.550 23:46:23 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.550 23:46:23 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:17.550 23:46:23 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:17.550 23:46:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.550 23:46:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.550 23:46:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.551 23:46:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.551 23:46:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.809 { 00:06:17.809 "nbd_device": "/dev/nbd0", 00:06:17.809 "bdev_name": "Malloc0" 00:06:17.809 }, 00:06:17.809 { 00:06:17.809 "nbd_device": "/dev/nbd1", 00:06:17.809 "bdev_name": "Malloc1" 00:06:17.809 } 00:06:17.809 ]' 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.809 { 00:06:17.809 "nbd_device": "/dev/nbd0", 00:06:17.809 "bdev_name": "Malloc0" 00:06:17.809 }, 00:06:17.809 { 00:06:17.809 "nbd_device": "/dev/nbd1", 00:06:17.809 "bdev_name": "Malloc1" 00:06:17.809 } 00:06:17.809 ]' 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.809 /dev/nbd1' 00:06:17.809 23:46:23 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.809 /dev/nbd1' 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.809 256+0 records in 00:06:17.809 256+0 records out 00:06:17.809 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509665 s, 206 MB/s 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.809 23:46:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.067 256+0 records in 00:06:18.067 256+0 records out 00:06:18.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223414 s, 46.9 MB/s 00:06:18.067 23:46:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.067 23:46:23 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.067 256+0 records in 00:06:18.068 256+0 records out 00:06:18.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022345 s, 46.9 MB/s 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.068 23:46:23 event.app_repeat -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.068 23:46:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.325 23:46:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.325 23:46:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.325 23:46:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.325 23:46:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.325 23:46:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.325 23:46:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.325 23:46:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.325 23:46:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.325 23:46:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.325 23:46:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.582 23:46:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.582 23:46:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.582 23:46:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.582 23:46:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.582 23:46:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.582 23:46:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:06:18.582 23:46:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.582 23:46:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.582 23:46:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.582 23:46:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.582 23:46:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.840 23:46:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.840 23:46:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.840 23:46:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.840 23:46:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.840 23:46:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.840 23:46:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.840 23:46:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:18.840 23:46:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.840 23:46:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.840 23:46:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.840 23:46:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.840 23:46:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.840 23:46:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.097 23:46:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:19.357 [2024-07-24 23:46:24.910779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.357 [2024-07-24 23:46:25.001510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
00:06:19.357 [2024-07-24 23:46:25.001515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.357 [2024-07-24 23:46:25.063952] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.357 [2024-07-24 23:46:25.064025] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.696 23:46:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.696 23:46:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:22.696 spdk_app_start Round 2 00:06:22.696 23:46:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 659675 /var/tmp/spdk-nbd.sock 00:06:22.696 23:46:27 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 659675 ']' 00:06:22.696 23:46:27 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.696 23:46:27 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:22.696 23:46:27 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:22.696 23:46:27 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:22.696 23:46:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:22.696 23:46:27 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:22.696 23:46:27 event.app_repeat -- common/autotest_common.sh@860 -- # return 0
00:06:22.696 23:46:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:22.696 Malloc0
00:06:22.696 23:46:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:22.954 Malloc1
00:06:22.954 23:46:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:22.954 23:46:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:22.954 23:46:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:22.954 23:46:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:22.954 23:46:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:22.954 23:46:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:22.954 23:46:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:22.954 23:46:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:22.954 23:46:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:22.954 23:46:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:22.954 23:46:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:22.954 23:46:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:22.954 23:46:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:22.954 23:46:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:22.954 23:46:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:22.954 23:46:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:23.212 /dev/nbd0
00:06:23.212 23:46:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:23.212 23:46:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:23.212 23:46:28 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0
00:06:23.212 23:46:28 event.app_repeat -- common/autotest_common.sh@865 -- # local i
00:06:23.212 23:46:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:06:23.212 23:46:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:06:23.212 23:46:28 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions
00:06:23.212 23:46:28 event.app_repeat -- common/autotest_common.sh@869 -- # break
00:06:23.212 23:46:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:06:23.212 23:46:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:06:23.212 23:46:28 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:23.212 1+0 records in
00:06:23.212 1+0 records out
00:06:23.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214274 s, 19.1 MB/s
00:06:23.212 23:46:28 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:23.212 23:46:28 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:06:23.212 23:46:28 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:23.212 23:46:28 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:06:23.212 23:46:28 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:06:23.212 23:46:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:23.212 23:46:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:23.212 23:46:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:23.469 /dev/nbd1
00:06:23.469 23:46:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:23.469 23:46:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:23.469 23:46:28 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1
00:06:23.469 23:46:28 event.app_repeat -- common/autotest_common.sh@865 -- # local i
00:06:23.469 23:46:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:06:23.469 23:46:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:06:23.469 23:46:28 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions
00:06:23.469 23:46:28 event.app_repeat -- common/autotest_common.sh@869 -- # break
00:06:23.469 23:46:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:06:23.469 23:46:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:06:23.469 23:46:28 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:23.469 1+0 records in
00:06:23.469 1+0 records out
00:06:23.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189549 s, 21.6 MB/s
00:06:23.469 23:46:29 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:23.469 23:46:29 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:06:23.469 23:46:29 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:23.469 23:46:29 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:06:23.469 23:46:29 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:06:23.469 23:46:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:23.469 23:46:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:23.469 23:46:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:23.469 23:46:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:23.469 23:46:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:23.727 {
00:06:23.727 "nbd_device": "/dev/nbd0",
00:06:23.727 "bdev_name": "Malloc0"
00:06:23.727 },
00:06:23.727 {
00:06:23.727 "nbd_device": "/dev/nbd1",
00:06:23.727 "bdev_name": "Malloc1"
00:06:23.727 }
00:06:23.727 ]'
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:23.727 {
00:06:23.727 "nbd_device": "/dev/nbd0",
00:06:23.727 "bdev_name": "Malloc0"
00:06:23.727 },
00:06:23.727 {
00:06:23.727 "nbd_device": "/dev/nbd1",
00:06:23.727 "bdev_name": "Malloc1"
00:06:23.727 }
00:06:23.727 ]'
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:23.727 /dev/nbd1'
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:23.727 /dev/nbd1'
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:23.727 256+0 records in
00:06:23.727 256+0 records out
00:06:23.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495691 s, 212 MB/s
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:23.727 256+0 records in
00:06:23.727 256+0 records out
00:06:23.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203538 s, 51.5 MB/s
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:23.727 256+0 records in
00:06:23.727 256+0 records out
00:06:23.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246396 s, 42.6 MB/s
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:23.727 23:46:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:23.985 23:46:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:23.985 23:46:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:23.985 23:46:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:23.985 23:46:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:23.985 23:46:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:23.985 23:46:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:23.985 23:46:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:23.985 23:46:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:23.985 23:46:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:23.985 23:46:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:24.241 23:46:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:24.241 23:46:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:24.241 23:46:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:24.241 23:46:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:24.241 23:46:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:24.242 23:46:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:24.242 23:46:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:24.242 23:46:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:24.242 23:46:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:24.242 23:46:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:24.242 23:46:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:24.499 23:46:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:24.499 23:46:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:24.499 23:46:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:24.499 23:46:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:24.499 23:46:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:24.756 23:46:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:24.756 23:46:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:24.756 23:46:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:24.756 23:46:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:24.756 23:46:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:24.756 23:46:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:24.756 23:46:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:24.756 23:46:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:25.014 23:46:30 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:25.014 [2024-07-24 23:46:30.724435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:25.271 [2024-07-24 23:46:30.817645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:25.271 [2024-07-24 23:46:30.817650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.271 [2024-07-24 23:46:30.882096] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:25.271 [2024-07-24 23:46:30.882183] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:27.795 23:46:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 659675 /var/tmp/spdk-nbd.sock
00:06:27.795 23:46:33 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 659675 ']'
00:06:27.795 23:46:33 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:27.795 23:46:33 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:27.795 23:46:33 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:27.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:27.796 23:46:33 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:27.796 23:46:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:28.053 23:46:33 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:28.053 23:46:33 event.app_repeat -- common/autotest_common.sh@860 -- # return 0
00:06:28.053 23:46:33 event.app_repeat -- event/event.sh@39 -- # killprocess 659675
00:06:28.053 23:46:33 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 659675 ']'
00:06:28.053 23:46:33 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 659675
00:06:28.053 23:46:33 event.app_repeat -- common/autotest_common.sh@951 -- # uname
00:06:28.053 23:46:33 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:28.053 23:46:33 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 659675
00:06:28.311 23:46:33 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:28.311 23:46:33 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:28.311 23:46:33 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 659675'
00:06:28.311 killing process with pid 659675
00:06:28.311 23:46:33 event.app_repeat -- common/autotest_common.sh@965 -- # kill 659675
00:06:28.311 23:46:33 event.app_repeat -- common/autotest_common.sh@970 -- # wait 659675
00:06:28.311 spdk_app_start is called in Round 0.
00:06:28.311 Shutdown signal received, stop current app iteration
00:06:28.311 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization...
00:06:28.311 spdk_app_start is called in Round 1.
00:06:28.311 Shutdown signal received, stop current app iteration
00:06:28.311 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization...
00:06:28.311 spdk_app_start is called in Round 2.
00:06:28.311 Shutdown signal received, stop current app iteration
00:06:28.311 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization...
00:06:28.311 spdk_app_start is called in Round 3.
00:06:28.311 Shutdown signal received, stop current app iteration
00:06:28.311 23:46:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:28.311 23:46:33 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:28.311
00:06:28.311 real 0m17.930s
00:06:28.311 user 0m38.987s
00:06:28.311 sys 0m3.224s
00:06:28.311 23:46:33 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:28.311 23:46:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:28.311 ************************************
00:06:28.311 END TEST app_repeat
00:06:28.311 ************************************
00:06:28.311 23:46:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:28.311 23:46:33 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:28.311 23:46:33 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:28.311 23:46:33 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:28.311 23:46:34 event -- common/autotest_common.sh@10 -- # set +x
00:06:28.569 ************************************
00:06:28.569 START TEST cpu_locks
00:06:28.569 ************************************
00:06:28.569 23:46:34 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:28.569 * Looking for test storage...
00:06:28.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:28.569 23:46:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:28.569 23:46:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:28.569 23:46:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:28.569 23:46:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:28.569 23:46:34 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:28.569 23:46:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:28.569 23:46:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:28.569 ************************************
00:06:28.569 START TEST default_locks
00:06:28.569 ************************************
00:06:28.569 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks
00:06:28.569 23:46:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=662044
00:06:28.569 23:46:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:28.569 23:46:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 662044
00:06:28.569 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 662044 ']'
00:06:28.569 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:28.569 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:28.569 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:28.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:28.569 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:28.569 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:28.569 [2024-07-24 23:46:34.154251] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:06:28.569 [2024-07-24 23:46:34.154342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662044 ]
00:06:28.569 EAL: No free 2048 kB hugepages reported on node 1
00:06:28.569 [2024-07-24 23:46:34.210449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:28.826 [2024-07-24 23:46:34.298576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:29.083 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:29.083 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0
00:06:29.083 23:46:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 662044
00:06:29.083 23:46:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 662044
00:06:29.083 23:46:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:29.341 lslocks: write error
00:06:29.341 23:46:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 662044
00:06:29.341 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 662044 ']'
00:06:29.341 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 662044
00:06:29.341 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname
00:06:29.341 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:29.341 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 662044
00:06:29.341 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:29.341 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:29.341 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 662044'
00:06:29.341 killing process with pid 662044
00:06:29.341 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 662044
00:06:29.341 23:46:34 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 662044
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 662044
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 662044
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 662044
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 662044 ']'
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:29.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:29.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (662044) - No such process
00:06:29.598 ERROR: process (pid: 662044) is no longer running
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:29.598 23:46:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:29.599 23:46:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:29.599 23:46:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:29.599 23:46:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:29.599
00:06:29.599 real 0m1.187s
00:06:29.599 user 0m1.121s
00:06:29.599 sys 0m0.530s
00:06:29.599 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:29.599 23:46:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:29.599 ************************************
00:06:29.599 END TEST default_locks
00:06:29.599 ************************************
00:06:29.599 23:46:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:29.599 23:46:35 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:29.599 23:46:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:29.599 23:46:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:29.856 ************************************
00:06:29.856 START TEST default_locks_via_rpc
00:06:29.856 ************************************
00:06:29.856 23:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc
00:06:29.856 23:46:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=662206
00:06:29.856 23:46:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:29.856 23:46:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 662206
00:06:29.856 23:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 662206 ']'
00:06:29.856 23:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:29.856 23:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:29.856 23:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:29.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:29.856 23:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:29.856 23:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:29.856 [2024-07-24 23:46:35.387742] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:06:29.856 [2024-07-24 23:46:35.387832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662206 ]
00:06:29.856 EAL: No free 2048 kB hugepages reported on node 1
00:06:29.856 [2024-07-24 23:46:35.451191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:29.856 [2024-07-24 23:46:35.543840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:30.114 23:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:30.114 23:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0
00:06:30.114 23:46:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:30.114 23:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:30.114 23:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:30.114 23:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:30.114 23:46:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:30.114 23:46:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:30.114 23:46:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:30.114 23:46:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:30.114 23:46:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:30.114 23:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:30.114 23:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:30.114 23:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:30.114 23:46:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 662206
00:06:30.114 23:46:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 662206
00:06:30.114 23:46:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:30.371 23:46:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 662206
00:06:30.371 23:46:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 662206 ']'
00:06:30.371 23:46:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 662206
00:06:30.371 23:46:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname
00:06:30.371 23:46:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:30.371 23:46:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 662206
00:06:30.628 23:46:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:30.628 23:46:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:30.628 23:46:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 662206'
00:06:30.628 killing process with pid 662206
00:06:30.628 23:46:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 662206
00:06:30.628 23:46:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 662206
00:06:30.887
00:06:30.887 real 0m1.156s
00:06:30.887 user 0m1.103s
00:06:30.887 sys 0m0.536s
00:06:30.887 23:46:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:30.887 23:46:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:30.887 ************************************
00:06:30.887 END TEST default_locks_via_rpc
00:06:30.887 ************************************
00:06:30.887 23:46:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:30.887 23:46:36 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:30.887 23:46:36 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:30.887 23:46:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:30.887 ************************************
00:06:30.887 START TEST non_locking_app_on_locked_coremask
00:06:30.887 ************************************
00:06:30.887 23:46:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask
00:06:30.887 23:46:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=662376
00:06:30.887 23:46:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:30.887 23:46:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 662376 /var/tmp/spdk.sock
00:06:30.887 23:46:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 662376 ']'
00:06:30.887 23:46:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:30.887 23:46:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:30.887 23:46:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:30.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:30.887 23:46:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:30.887 23:46:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:30.887 [2024-07-24 23:46:36.595609] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:06:30.887 [2024-07-24 23:46:36.595687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662376 ]
00:06:31.145 EAL: No free 2048 kB hugepages reported on node 1
00:06:31.145 [2024-07-24 23:46:36.650616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:31.145 [2024-07-24 23:46:36.741533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.403 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:31.403 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0
00:06:31.403 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=662382
00:06:31.403 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:31.403 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 662382 /var/tmp/spdk2.sock
00:06:31.403 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 662382 ']'
00:06:31.403 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:31.403 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:31.403 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:31.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:31.403 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:31.403 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:31.403 [2024-07-24 23:46:37.060014] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:06:31.403 [2024-07-24 23:46:37.060109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662382 ]
00:06:31.661 EAL: No free 2048 kB hugepages reported on node 1
00:06:31.661 [2024-07-24 23:46:37.155219] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:31.661 [2024-07-24 23:46:37.155263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.661 [2024-07-24 23:46:37.341239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.594 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.594 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:32.594 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 662376 00:06:32.595 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 662376 00:06:32.595 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.852 lslocks: write error 00:06:32.852 23:46:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 662376 00:06:32.852 23:46:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 662376 ']' 00:06:32.852 23:46:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 662376 00:06:32.852 23:46:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:32.852 23:46:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:32.852 23:46:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 662376 00:06:32.852 23:46:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:32.852 23:46:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:32.852 23:46:38 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 662376' 00:06:32.852 killing process with pid 662376 00:06:32.852 23:46:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 662376 00:06:32.852 23:46:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 662376 00:06:33.785 23:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 662382 00:06:33.785 23:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 662382 ']' 00:06:33.785 23:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 662382 00:06:33.785 23:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:33.785 23:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.785 23:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 662382 00:06:33.785 23:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:33.785 23:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:33.785 23:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 662382' 00:06:33.785 killing process with pid 662382 00:06:33.785 23:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 662382 00:06:33.785 23:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 662382 00:06:34.043 00:06:34.043 real 0m3.147s 00:06:34.043 user 0m3.252s 00:06:34.043 sys 0m1.041s 00:06:34.043 23:46:39 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.043 23:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.043 ************************************ 00:06:34.043 END TEST non_locking_app_on_locked_coremask 00:06:34.043 ************************************ 00:06:34.043 23:46:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:34.043 23:46:39 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:34.043 23:46:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.043 23:46:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.043 ************************************ 00:06:34.043 START TEST locking_app_on_unlocked_coremask 00:06:34.043 ************************************ 00:06:34.043 23:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:34.043 23:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=662770 00:06:34.043 23:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:34.043 23:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 662770 /var/tmp/spdk.sock 00:06:34.043 23:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 662770 ']' 00:06:34.043 23:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.043 23:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.043 23:46:39 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.043 23:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.043 23:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.301 [2024-07-24 23:46:39.789739] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:34.301 [2024-07-24 23:46:39.789831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662770 ] 00:06:34.301 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.301 [2024-07-24 23:46:39.846727] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:34.301 [2024-07-24 23:46:39.846762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.301 [2024-07-24 23:46:39.936208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.559 23:46:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.559 23:46:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:34.559 23:46:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=662816 00:06:34.559 23:46:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:34.559 23:46:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 662816 /var/tmp/spdk2.sock 00:06:34.559 23:46:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 662816 ']' 00:06:34.559 23:46:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.559 23:46:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.559 23:46:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.559 23:46:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.559 23:46:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.559 [2024-07-24 23:46:40.246307] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:34.559 [2024-07-24 23:46:40.246393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662816 ] 00:06:34.559 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.817 [2024-07-24 23:46:40.336735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.817 [2024-07-24 23:46:40.526493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.750 23:46:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:35.750 23:46:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:35.750 23:46:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 662816 00:06:35.750 23:46:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 662816 00:06:35.750 23:46:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.007 lslocks: write error 00:06:36.007 23:46:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 662770 00:06:36.007 23:46:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 662770 ']' 00:06:36.007 23:46:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 662770 00:06:36.007 23:46:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:36.007 23:46:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:36.008 23:46:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 662770 00:06:36.008 23:46:41 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:36.008 23:46:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:36.008 23:46:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 662770' 00:06:36.008 killing process with pid 662770 00:06:36.008 23:46:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 662770 00:06:36.008 23:46:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 662770 00:06:36.941 23:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 662816 00:06:36.941 23:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 662816 ']' 00:06:36.941 23:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 662816 00:06:36.941 23:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:36.941 23:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:36.941 23:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 662816 00:06:36.941 23:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:36.941 23:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:36.941 23:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 662816' 00:06:36.941 killing process with pid 662816 00:06:36.941 23:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@965 -- # kill 662816 00:06:36.941 23:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 662816 00:06:37.198 00:06:37.198 real 0m3.134s 00:06:37.198 user 0m3.256s 00:06:37.198 sys 0m1.041s 00:06:37.198 23:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.198 23:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.198 ************************************ 00:06:37.198 END TEST locking_app_on_unlocked_coremask 00:06:37.198 ************************************ 00:06:37.198 23:46:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:37.198 23:46:42 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:37.198 23:46:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.198 23:46:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.456 ************************************ 00:06:37.456 START TEST locking_app_on_locked_coremask 00:06:37.456 ************************************ 00:06:37.456 23:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:37.456 23:46:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=663124 00:06:37.456 23:46:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.456 23:46:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 663124 /var/tmp/spdk.sock 00:06:37.456 23:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 663124 ']' 00:06:37.456 23:46:42 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.456 23:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.456 23:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.456 23:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.456 23:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.456 [2024-07-24 23:46:42.969731] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:37.456 [2024-07-24 23:46:42.969819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663124 ] 00:06:37.457 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.457 [2024-07-24 23:46:43.034936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.457 [2024-07-24 23:46:43.130363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.714 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.714 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:37.714 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=663249 00:06:37.714 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 663249 /var/tmp/spdk2.sock 00:06:37.715 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@648 -- # local es=0 00:06:37.715 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 663249 /var/tmp/spdk2.sock 00:06:37.715 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:37.715 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:37.715 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.715 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:37.715 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.715 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 663249 /var/tmp/spdk2.sock 00:06:37.715 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 663249 ']' 00:06:37.715 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.715 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.715 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:37.715 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.715 23:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.972 [2024-07-24 23:46:43.442704] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:37.972 [2024-07-24 23:46:43.442785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663249 ] 00:06:37.972 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.972 [2024-07-24 23:46:43.537263] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 663124 has claimed it. 00:06:37.972 [2024-07-24 23:46:43.537318] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:38.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (663249) - No such process 00:06:38.536 ERROR: process (pid: 663249) is no longer running 00:06:38.536 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.536 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:38.536 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:38.536 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:38.536 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:38.536 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:38.536 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 
663124 00:06:38.537 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 663124 00:06:38.537 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.794 lslocks: write error 00:06:38.794 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 663124 00:06:38.794 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 663124 ']' 00:06:38.794 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 663124 00:06:38.794 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:38.794 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:38.794 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 663124 00:06:38.794 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:38.794 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:38.794 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 663124' 00:06:38.794 killing process with pid 663124 00:06:38.794 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 663124 00:06:38.794 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 663124 00:06:39.360 00:06:39.360 real 0m1.997s 00:06:39.360 user 0m2.140s 00:06:39.360 sys 0m0.639s 00:06:39.360 23:46:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.360 23:46:44 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.360 ************************************ 00:06:39.360 END TEST locking_app_on_locked_coremask 00:06:39.360 ************************************ 00:06:39.360 23:46:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:39.360 23:46:44 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:39.360 23:46:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.360 23:46:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.360 ************************************ 00:06:39.360 START TEST locking_overlapped_coremask 00:06:39.360 ************************************ 00:06:39.360 23:46:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:39.360 23:46:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=663418 00:06:39.360 23:46:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:39.360 23:46:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 663418 /var/tmp/spdk.sock 00:06:39.360 23:46:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 663418 ']' 00:06:39.360 23:46:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.360 23:46:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.360 23:46:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:39.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.360 23:46:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.360 23:46:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.360 [2024-07-24 23:46:45.012770] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:39.360 [2024-07-24 23:46:45.012863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663418 ] 00:06:39.360 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.360 [2024-07-24 23:46:45.075347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.618 [2024-07-24 23:46:45.174288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.618 [2024-07-24 23:46:45.174343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.618 [2024-07-24 23:46:45.174361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=663508 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 663508 /var/tmp/spdk2.sock 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # 
local es=0 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 663508 /var/tmp/spdk2.sock 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 663508 /var/tmp/spdk2.sock 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 663508 ']' 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.876 23:46:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.876 [2024-07-24 23:46:45.470362] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:39.876 [2024-07-24 23:46:45.470472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663508 ] 00:06:39.876 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.876 [2024-07-24 23:46:45.557604] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 663418 has claimed it. 00:06:39.876 [2024-07-24 23:46:45.557665] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:40.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (663508) - No such process 00:06:40.808 ERROR: process (pid: 663508) is no longer running 00:06:40.808 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.808 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:40.808 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:40.808 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.808 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.808 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.808 23:46:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:40.808 23:46:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:40.808 23:46:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:40.808 23:46:46 event.cpu_locks.locking_overlapped_coremask -- 
event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:40.808 23:46:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 663418 00:06:40.808 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 663418 ']' 00:06:40.808 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 663418 00:06:40.809 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:40.809 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:40.809 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 663418 00:06:40.809 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:40.809 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:40.809 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 663418' 00:06:40.809 killing process with pid 663418 00:06:40.809 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 663418 00:06:40.809 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 663418 00:06:41.067 00:06:41.067 real 0m1.637s 00:06:41.067 user 0m4.432s 00:06:41.067 sys 0m0.451s 00:06:41.067 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.067 23:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # 
set +x 00:06:41.067 ************************************ 00:06:41.067 END TEST locking_overlapped_coremask 00:06:41.067 ************************************ 00:06:41.067 23:46:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:41.067 23:46:46 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:41.067 23:46:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.067 23:46:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.067 ************************************ 00:06:41.067 START TEST locking_overlapped_coremask_via_rpc 00:06:41.067 ************************************ 00:06:41.067 23:46:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:41.067 23:46:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=663713 00:06:41.067 23:46:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:41.067 23:46:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 663713 /var/tmp/spdk.sock 00:06:41.067 23:46:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 663713 ']' 00:06:41.067 23:46:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.067 23:46:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.067 23:46:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:41.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.067 23:46:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.067 23:46:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.067 [2024-07-24 23:46:46.696768] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:41.067 [2024-07-24 23:46:46.696840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663713 ] 00:06:41.067 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.067 [2024-07-24 23:46:46.752482] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:41.067 [2024-07-24 23:46:46.752519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.325 [2024-07-24 23:46:46.842011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.325 [2024-07-24 23:46:46.842074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.325 [2024-07-24 23:46:46.842077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.583 23:46:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:41.583 23:46:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:41.583 23:46:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=663723 00:06:41.583 23:46:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 663723 /var/tmp/spdk2.sock 00:06:41.583 23:46:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 663723 ']' 00:06:41.583 
23:46:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.583 23:46:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.583 23:46:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.583 23:46:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.583 23:46:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:41.583 23:46:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.583 [2024-07-24 23:46:47.143798] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:41.583 [2024-07-24 23:46:47.143886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663723 ] 00:06:41.583 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.583 [2024-07-24 23:46:47.229185] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
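Annotation: the lock collisions in this test come from overlapping core masks. The first target runs with `-m 0x7` (cores 0-2) and the second with `-m 0x1c` (cores 2-4); both claim core 2, which is why `claim_cpu_cores` reports "Cannot create lock on core 2" and the RPC later fails with "Failed to claim CPU core: 2". A minimal sketch (not SPDK source; `cores` is a hypothetical helper) of how the two masks intersect:

```python
# Sketch: why spdk_tgt masks 0x7 and 0x1c collide on core 2.
def cores(mask: int) -> set:
    """Return the set of core indices whose bit is set in the mask."""
    return {bit for bit in range(mask.bit_length()) if (mask >> bit) & 1}

first = cores(0x7)    # cores {0, 1, 2}, reactors started in the log above
second = cores(0x1c)  # cores {2, 3, 4}, requested by the second target
overlap = first & second
print(sorted(overlap))  # [2] -- the core both targets try to lock
```

With `--disable-cpumask-locks` both targets start anyway, and the conflict only surfaces when `framework_enable_cpumask_locks` is invoked over the RPC socket, as the JSON-RPC error below shows.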
00:06:41.583 [2024-07-24 23:46:47.229219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.840 [2024-07-24 23:46:47.407248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.840 [2024-07-24 23:46:47.411192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:41.840 [2024-07-24 23:46:47.411195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.405 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:42.405 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:42.405 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:42.405 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.405 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.405 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.405 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.405 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:42.405 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.405 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:42.405 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.405 23:46:48 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:42.405 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.405 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.405 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.405 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.405 [2024-07-24 23:46:48.089209] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 663713 has claimed it. 00:06:42.405 request: 00:06:42.406 { 00:06:42.406 "method": "framework_enable_cpumask_locks", 00:06:42.406 "req_id": 1 00:06:42.406 } 00:06:42.406 Got JSON-RPC error response 00:06:42.406 response: 00:06:42.406 { 00:06:42.406 "code": -32603, 00:06:42.406 "message": "Failed to claim CPU core: 2" 00:06:42.406 } 00:06:42.406 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:42.406 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:42.406 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.406 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:42.406 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.406 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 663713 /var/tmp/spdk.sock 00:06:42.406 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- 
# '[' -z 663713 ']' 00:06:42.406 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.406 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.406 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.406 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.406 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:42.661 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:42.661 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 663723 /var/tmp/spdk2.sock 00:06:42.661 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 663723 ']' 00:06:42.661 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.662 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.662 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:42.662 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.662 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.918 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:42.918 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:42.918 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:42.918 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:42.918 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:42.918 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:42.918 00:06:42.918 real 0m1.950s 00:06:42.918 user 0m1.030s 00:06:42.918 sys 0m0.153s 00:06:42.918 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.918 23:46:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.918 ************************************ 00:06:42.918 END TEST locking_overlapped_coremask_via_rpc 00:06:42.918 ************************************ 00:06:42.918 23:46:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:42.918 23:46:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 663713 ]] 00:06:42.918 23:46:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 663713 00:06:42.918 23:46:48 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 663713 ']' 00:06:42.918 23:46:48 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 663713 00:06:42.918 23:46:48 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:42.918 23:46:48 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:42.918 23:46:48 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 663713 00:06:43.176 23:46:48 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:43.176 23:46:48 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:43.176 23:46:48 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 663713' 00:06:43.176 killing process with pid 663713 00:06:43.176 23:46:48 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 663713 00:06:43.176 23:46:48 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 663713 00:06:43.433 23:46:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 663723 ]] 00:06:43.433 23:46:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 663723 00:06:43.433 23:46:49 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 663723 ']' 00:06:43.433 23:46:49 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 663723 00:06:43.433 23:46:49 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:43.433 23:46:49 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:43.433 23:46:49 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 663723 00:06:43.433 23:46:49 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:43.433 23:46:49 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:43.433 23:46:49 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 663723' 00:06:43.433 
killing process with pid 663723 00:06:43.433 23:46:49 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 663723 00:06:43.433 23:46:49 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 663723 00:06:43.996 23:46:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.996 23:46:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:43.996 23:46:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 663713 ]] 00:06:43.996 23:46:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 663713 00:06:43.996 23:46:49 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 663713 ']' 00:06:43.996 23:46:49 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 663713 00:06:43.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (663713) - No such process 00:06:43.996 23:46:49 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 663713 is not found' 00:06:43.996 Process with pid 663713 is not found 00:06:43.996 23:46:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 663723 ]] 00:06:43.996 23:46:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 663723 00:06:43.996 23:46:49 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 663723 ']' 00:06:43.996 23:46:49 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 663723 00:06:43.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (663723) - No such process 00:06:43.996 23:46:49 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 663723 is not found' 00:06:43.996 Process with pid 663723 is not found 00:06:43.996 23:46:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.996 00:06:43.996 real 0m15.471s 00:06:43.996 user 0m27.056s 00:06:43.996 sys 0m5.287s 00:06:43.996 23:46:49 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.996 23:46:49 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:43.996 ************************************ 00:06:43.996 END TEST cpu_locks 00:06:43.996 ************************************ 00:06:43.996 00:06:43.996 real 0m41.322s 00:06:43.996 user 1m18.487s 00:06:43.996 sys 0m9.334s 00:06:43.996 23:46:49 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.996 23:46:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.996 ************************************ 00:06:43.996 END TEST event 00:06:43.996 ************************************ 00:06:43.996 23:46:49 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:43.996 23:46:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.996 23:46:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.996 23:46:49 -- common/autotest_common.sh@10 -- # set +x 00:06:43.996 ************************************ 00:06:43.996 START TEST thread 00:06:43.996 ************************************ 00:06:43.996 23:46:49 thread -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:43.996 * Looking for test storage... 
00:06:43.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:43.997 23:46:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:43.997 23:46:49 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:43.997 23:46:49 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.997 23:46:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.997 ************************************ 00:06:43.997 START TEST thread_poller_perf 00:06:43.997 ************************************ 00:06:43.997 23:46:49 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:43.997 [2024-07-24 23:46:49.652514] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:43.997 [2024-07-24 23:46:49.652580] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664092 ] 00:06:43.997 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.997 [2024-07-24 23:46:49.708352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.254 [2024-07-24 23:46:49.798247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.254 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:45.186 ====================================== 00:06:45.186 busy:2712114832 (cyc) 00:06:45.186 total_run_count: 291000 00:06:45.186 tsc_hz: 2700000000 (cyc) 00:06:45.186 ====================================== 00:06:45.186 poller_cost: 9319 (cyc), 3451 (nsec) 00:06:45.186 00:06:45.186 real 0m1.249s 00:06:45.186 user 0m1.167s 00:06:45.186 sys 0m0.076s 00:06:45.186 23:46:50 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.186 23:46:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.186 ************************************ 00:06:45.186 END TEST thread_poller_perf 00:06:45.186 ************************************ 00:06:45.444 23:46:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.444 23:46:50 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:45.444 23:46:50 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.444 23:46:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.444 ************************************ 00:06:45.444 START TEST thread_poller_perf 00:06:45.444 ************************************ 00:06:45.444 23:46:50 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.444 [2024-07-24 23:46:50.950957] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:45.444 [2024-07-24 23:46:50.951026] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664245 ] 00:06:45.444 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.444 [2024-07-24 23:46:51.012748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.444 [2024-07-24 23:46:51.107693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.444 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:46.815 ====================================== 00:06:46.815 busy:2702630405 (cyc) 00:06:46.815 total_run_count: 3854000 00:06:46.815 tsc_hz: 2700000000 (cyc) 00:06:46.815 ====================================== 00:06:46.815 poller_cost: 701 (cyc), 259 (nsec) 00:06:46.815 00:06:46.815 real 0m1.251s 00:06:46.815 user 0m1.157s 00:06:46.815 sys 0m0.089s 00:06:46.815 23:46:52 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.815 23:46:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.815 ************************************ 00:06:46.815 END TEST thread_poller_perf 00:06:46.815 ************************************ 00:06:46.815 23:46:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:46.815 00:06:46.815 real 0m2.644s 00:06:46.815 user 0m2.381s 00:06:46.815 sys 0m0.263s 00:06:46.815 23:46:52 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.815 23:46:52 thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.815 ************************************ 00:06:46.815 END TEST thread 00:06:46.815 ************************************ 00:06:46.815 23:46:52 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:46.815 23:46:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:46.815 
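Annotation: the two `poller_perf` summaries above can be cross-checked by hand. `poller_cost` appears to be the busy cycle count divided by `total_run_count`, converted to nanoseconds via `tsc_hz`; the sketch below (not the benchmark's actual source, just one way to reproduce the printed values) recomputes both runs:

```python
# Sketch: reproducing poller_perf's poller_cost from the logged counters.
def poller_cost(busy_cyc: int, runs: int, tsc_hz: int):
    cyc = busy_cyc // runs                  # cycles per poller invocation
    nsec = cyc * 1_000_000_000 // tsc_hz    # same cost in nanoseconds
    return cyc, nsec

# 1 us period run: busy 2712114832 cyc over 291000 runs at 2.7 GHz
print(poller_cost(2712114832, 291000, 2700000000))   # (9319, 3451)
# 0 us period run: busy 2702630405 cyc over 3854000 runs at 2.7 GHz
print(poller_cost(2702630405, 3854000, 2700000000))  # (701, 259)
```

Both results match the `poller_cost:` lines in the log, which also explains why the zero-period run is far cheaper per invocation: with no sleep between iterations, fixed scheduling overhead is amortized over ~13x more runs.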
23:46:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.815 23:46:52 -- common/autotest_common.sh@10 -- # set +x 00:06:46.815 ************************************ 00:06:46.816 START TEST accel 00:06:46.816 ************************************ 00:06:46.816 23:46:52 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:46.816 * Looking for test storage... 00:06:46.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:46.816 23:46:52 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:46.816 23:46:52 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:46.816 23:46:52 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:46.816 23:46:52 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=664557 00:06:46.816 23:46:52 accel -- accel/accel.sh@63 -- # waitforlisten 664557 00:06:46.816 23:46:52 accel -- common/autotest_common.sh@827 -- # '[' -z 664557 ']' 00:06:46.816 23:46:52 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.816 23:46:52 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:46.816 23:46:52 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:46.816 23:46:52 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.816 23:46:52 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.816 23:46:52 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:46.816 23:46:52 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.816 23:46:52 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.816 23:46:52 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.816 23:46:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.816 23:46:52 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.816 23:46:52 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.816 23:46:52 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:46.816 23:46:52 accel -- accel/accel.sh@41 -- # jq -r . 00:06:46.816 [2024-07-24 23:46:52.354588] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:46.816 [2024-07-24 23:46:52.354688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664557 ] 00:06:46.816 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.816 [2024-07-24 23:46:52.413664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.816 [2024-07-24 23:46:52.500812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.075 23:46:52 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.075 23:46:52 accel -- common/autotest_common.sh@860 -- # return 0 00:06:47.075 23:46:52 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:47.075 23:46:52 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:47.075 23:46:52 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:47.075 23:46:52 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:47.075 23:46:52 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:47.075 23:46:52 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:47.075 23:46:52 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:47.075 23:46:52 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.075 23:46:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.075 23:46:52 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.334 23:46:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.334 23:46:52 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.334 23:46:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.334 23:46:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.335 23:46:52 accel -- accel/accel.sh@75 -- # killprocess 664557 00:06:47.335 23:46:52 accel -- common/autotest_common.sh@946 -- # '[' -z 664557 ']' 00:06:47.335 23:46:52 accel -- common/autotest_common.sh@950 -- # kill -0 664557 00:06:47.335 23:46:52 accel -- common/autotest_common.sh@951 -- # uname 00:06:47.335 23:46:52 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:47.335 23:46:52 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 664557
00:06:47.335 23:46:52 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:47.335 23:46:52 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:47.335 23:46:52 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 664557' 00:06:47.335 killing process with pid 664557 23:46:52 accel -- common/autotest_common.sh@965 -- # kill 664557 00:06:47.335 23:46:52 accel -- common/autotest_common.sh@970 -- # wait 664557 00:06:47.593 23:46:53 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:47.593 23:46:53 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:47.593 23:46:53
accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:47.593 23:46:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.593 23:46:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.593 23:46:53 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:47.593 23:46:53 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:47.594 23:46:53 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:47.594 23:46:53 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.594 23:46:53 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.594 23:46:53 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.594 23:46:53 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.594 23:46:53 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.594 23:46:53 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:47.594 23:46:53 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:47.594 23:46:53 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.594 23:46:53 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:47.905 23:46:53 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:47.905 23:46:53 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:47.905 23:46:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.905 23:46:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.905 ************************************ 00:06:47.905 START TEST accel_missing_filename 00:06:47.905 ************************************ 00:06:47.905 23:46:53 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:47.905 23:46:53 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:47.905 23:46:53 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:47.905 23:46:53 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:47.905 23:46:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.905 23:46:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:47.905 23:46:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.905 23:46:53 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:47.905 23:46:53 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:47.905 23:46:53 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:47.905 23:46:53 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.905 23:46:53 
accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.905 23:46:53 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.905 23:46:53 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.905 23:46:53 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.905 23:46:53 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:47.905 23:46:53 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:47.905 [2024-07-24 23:46:53.368054] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:47.905 [2024-07-24 23:46:53.368185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664722 ] 00:06:47.905 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.905 [2024-07-24 23:46:53.428790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.905 [2024-07-24 23:46:53.524737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.905 [2024-07-24 23:46:53.585578] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.180 [2024-07-24 23:46:53.661045] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:48.180 A filename is required. 
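The accel_missing_filename test above runs accel_perf under the NOT wrapper, which expects the command to fail and records its exit status in es. A minimal sketch of that exit-status normalization, reconstructed from the es=234 -> es=106 -> es=1 trace in this log (the exact case mapping inside autotest_common.sh is an assumption here):

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the NOT helper's es bookkeeping seen in the log.
normalize_es() {
  local es=$1
  (( es > 128 )) && es=$(( es - 128 ))   # strip the signal offset: 234 -> 106
  case "$es" in
    0) ;;         # success passes through unchanged
    *) es=1 ;;    # failure codes collapse to a single nonzero value
  esac
  echo "$es"
}

normalize_es 234
```

Both runs in this section are consistent with that shape: 234 -> 106 -> 1 for accel_missing_filename and 161 -> 33 -> 1 for accel_compress_verify.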
00:06:48.180 23:46:53 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:48.180 23:46:53 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.180 23:46:53 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:48.180 23:46:53 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:48.180 23:46:53 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:48.180 23:46:53 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.180 00:06:48.180 real 0m0.394s 00:06:48.180 user 0m0.279s 00:06:48.180 sys 0m0.149s 00:06:48.180 23:46:53 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.180 23:46:53 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:48.180 ************************************ 00:06:48.180 END TEST accel_missing_filename 00:06:48.180 ************************************ 00:06:48.180 23:46:53 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.180 23:46:53 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:48.180 23:46:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.180 23:46:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.180 ************************************ 00:06:48.180 START TEST accel_compress_verify 00:06:48.180 ************************************ 00:06:48.180 23:46:53 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.180 23:46:53 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:48.180 23:46:53 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # 
valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.180 23:46:53 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:48.180 23:46:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.180 23:46:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:48.180 23:46:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.180 23:46:53 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.180 23:46:53 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.180 23:46:53 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:48.180 23:46:53 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.180 23:46:53 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.180 23:46:53 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.180 23:46:53 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.180 23:46:53 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.180 23:46:53 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:48.180 23:46:53 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:48.180 [2024-07-24 23:46:53.811857] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:48.180 [2024-07-24 23:46:53.811922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664760 ] 00:06:48.180 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.180 [2024-07-24 23:46:53.874586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.438 [2024-07-24 23:46:53.970929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.438 [2024-07-24 23:46:54.035462] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.438 [2024-07-24 23:46:54.125917] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:48.696 00:06:48.696 Compression does not support the verify option, aborting. 00:06:48.696 23:46:54 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:48.696 23:46:54 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.696 23:46:54 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:48.696 23:46:54 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:48.696 23:46:54 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:48.696 23:46:54 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.696 00:06:48.696 real 0m0.418s 00:06:48.696 user 0m0.300s 00:06:48.696 sys 0m0.152s 00:06:48.696 23:46:54 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.696 23:46:54 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:48.696 ************************************ 00:06:48.696 END TEST accel_compress_verify 00:06:48.696 ************************************ 00:06:48.696 23:46:54 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:48.696 
23:46:54 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:48.696 23:46:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.696 23:46:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.696 ************************************ 00:06:48.696 START TEST accel_wrong_workload 00:06:48.696 ************************************ 00:06:48.696 23:46:54 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:48.696 23:46:54 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:48.696 23:46:54 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:48.696 23:46:54 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:48.696 23:46:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.696 23:46:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:48.696 23:46:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.696 23:46:54 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:48.696 23:46:54 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:48.696 23:46:54 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:48.696 23:46:54 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.696 23:46:54 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.696 23:46:54 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.696 23:46:54 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.696 23:46:54 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:06:48.696 23:46:54 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:48.696 23:46:54 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:48.696 Unsupported workload type: foobar 00:06:48.696 [2024-07-24 23:46:54.279687] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:48.696 accel_perf options: 00:06:48.696 [-h help message] 00:06:48.696 [-q queue depth per core] 00:06:48.696 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:48.696 [-T number of threads per core 00:06:48.696 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:48.696 [-t time in seconds] 00:06:48.696 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:48.696 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:48.696 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:48.696 [-l for compress/decompress workloads, name of uncompressed input file 00:06:48.696 [-S for crc32c workload, use this seed value (default 0) 00:06:48.696 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:48.696 [-f for fill workload, use this BYTE value (default 255) 00:06:48.696 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:48.696 [-y verify result if this switch is on] 00:06:48.696 [-a tasks to allocate per core (default: same value as -q)] 00:06:48.696 Can be used to spread operations across a wider range of memory. 
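The "Unsupported workload type: foobar" failure above comes from accel_perf rejecting an unknown -w value. A sketch of that validation; the workload list is copied from the -w help text above, while the function itself is hypothetical (accel_perf does this check in C, not shell):

```shell
#!/usr/bin/env bash
# Hypothetical shell rendering of accel_perf's -w workload-name check.
valid_workloads="copy fill crc32c copy_crc32c compare compress decompress dualcast xor"

check_workload() {
  local w=$1 v
  for v in $valid_workloads; do
    [[ $w == "$v" ]] && return 0   # recognized workload type
  done
  echo "Unsupported workload type: $w" >&2
  return 1
}

check_workload foobar || echo rejected
```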
00:06:48.696 23:46:54 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:48.696 23:46:54 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.696 23:46:54 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.696 23:46:54 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.696 00:06:48.696 real 0m0.023s 00:06:48.696 user 0m0.014s 00:06:48.696 sys 0m0.009s 00:06:48.696 23:46:54 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.696 23:46:54 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:48.696 ************************************ 00:06:48.696 END TEST accel_wrong_workload 00:06:48.696 ************************************ 00:06:48.696 Error: writing output failed: Broken pipe 00:06:48.696 23:46:54 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:48.696 23:46:54 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:48.696 23:46:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.696 23:46:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.696 ************************************ 00:06:48.696 START TEST accel_negative_buffers 00:06:48.696 ************************************ 00:06:48.696 23:46:54 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:48.696 23:46:54 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:48.696 23:46:54 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:48.696 23:46:54 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:48.696 23:46:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.696 23:46:54 
accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:48.696 23:46:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.696 23:46:54 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:48.696 23:46:54 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:48.696 23:46:54 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:48.696 23:46:54 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.696 23:46:54 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.696 23:46:54 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.696 23:46:54 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.696 23:46:54 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.696 23:46:54 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:48.696 23:46:54 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:48.696 -x option must be non-negative. 00:06:48.696 [2024-07-24 23:46:54.343417] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:48.696 accel_perf options: 00:06:48.696 [-h help message] 00:06:48.696 [-q queue depth per core] 00:06:48.696 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:48.696 [-T number of threads per core 00:06:48.696 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:48.696 [-t time in seconds] 00:06:48.696 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:48.696 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:48.696 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:48.696 [-l for compress/decompress workloads, name of uncompressed input file 00:06:48.696 [-S for crc32c workload, use this seed value (default 0) 00:06:48.696 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:48.696 [-f for fill workload, use this BYTE value (default 255) 00:06:48.696 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:48.696 [-y verify result if this switch is on] 00:06:48.696 [-a tasks to allocate per core (default: same value as -q)] 00:06:48.696 Can be used to spread operations across a wider range of memory. 
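The accel_negative_buffers test passes -x -1 and expects the "-x option must be non-negative." error shown above. A sketch of that argument check; the error message is from the log, the function name is made up (the real check lives in accel_perf's C option parsing):

```shell
#!/usr/bin/env bash
# Hypothetical shell rendering of the -x (xor source buffer count) check.
check_xor_sources() {
  local n=$1
  if (( n < 0 )); then
    echo "-x option must be non-negative." >&2
    return 1
  fi
}

check_xor_sources -1 || echo rejected
```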
00:06:48.696 23:46:54 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:48.697 23:46:54 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.697 23:46:54 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.697 23:46:54 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.697 00:06:48.697 real 0m0.020s 00:06:48.697 user 0m0.013s 00:06:48.697 sys 0m0.007s 00:06:48.697 23:46:54 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.697 23:46:54 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:48.697 ************************************ 00:06:48.697 END TEST accel_negative_buffers 00:06:48.697 ************************************ 00:06:48.697 Error: writing output failed: Broken pipe 00:06:48.697 23:46:54 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:48.697 23:46:54 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:48.697 23:46:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.697 23:46:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.697 ************************************ 00:06:48.697 START TEST accel_crc32c 00:06:48.697 ************************************ 00:06:48.697 23:46:54 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:48.697 23:46:54 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:48.697 23:46:54 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:48.697 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.697 23:46:54 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:48.697 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.697 23:46:54 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:48.697 23:46:54 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:48.697 23:46:54 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.697 23:46:54 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.697 23:46:54 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.697 23:46:54 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.697 23:46:54 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.697 23:46:54 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:48.697 23:46:54 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:48.697 [2024-07-24 23:46:54.407606] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:48.697 [2024-07-24 23:46:54.407671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664849 ] 00:06:48.955 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.955 [2024-07-24 23:46:54.472585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.955 [2024-07-24 23:46:54.568168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.955 23:46:54 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.955 23:46:54 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.955 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.956 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.956 23:46:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.956 23:46:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.956 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.956 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.956 23:46:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.956 23:46:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.956 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.956 23:46:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.327 23:46:55 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:50.327 23:46:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.327 00:06:50.327 real 0m1.412s 00:06:50.327 user 0m1.264s 00:06:50.327 sys 0m0.151s 00:06:50.327 23:46:55 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.327 23:46:55 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:50.327 ************************************ 00:06:50.327 END TEST accel_crc32c 00:06:50.327 ************************************ 00:06:50.327 23:46:55 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:50.327 23:46:55 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:50.327 23:46:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.327 23:46:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.327 ************************************ 00:06:50.327 START TEST accel_crc32c_C2 00:06:50.327 ************************************ 00:06:50.327 
23:46:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:50.327 23:46:55 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.327 23:46:55 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:50.327 23:46:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.327 23:46:55 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:50.327 23:46:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.327 23:46:55 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:50.327 23:46:55 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.327 23:46:55 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.327 23:46:55 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.327 23:46:55 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.327 23:46:55 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.327 23:46:55 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.327 23:46:55 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:50.327 23:46:55 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:50.327 [2024-07-24 23:46:55.863838] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:50.327 [2024-07-24 23:46:55.863904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665098 ] 00:06:50.327 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.327 [2024-07-24 23:46:55.924524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.327 [2024-07-24 23:46:56.016650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 
00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.586 23:46:56 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # 
val= 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.586 23:46:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.958 00:06:51.958 real 0m1.410s 00:06:51.958 user 0m1.266s 00:06:51.958 sys 0m0.147s 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.958 23:46:57 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:51.958 ************************************ 00:06:51.958 END TEST accel_crc32c_C2 00:06:51.958 ************************************ 00:06:51.958 23:46:57 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:51.958 23:46:57 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:51.958 23:46:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.958 23:46:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.958 ************************************ 00:06:51.958 START TEST accel_copy 00:06:51.958 ************************************ 00:06:51.958 23:46:57 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:51.959 [2024-07-24 23:46:57.320163] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:51.959 [2024-07-24 23:46:57.320223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665254 ] 00:06:51.959 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.959 [2024-07-24 23:46:57.380616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.959 [2024-07-24 23:46:57.476317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@20 -- # val=software 
00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.959 23:46:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.331 23:46:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.331 23:46:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.331 23:46:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.331 23:46:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.331 23:46:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:58 
accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:53.332 23:46:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.332 00:06:53.332 real 0m1.420s 00:06:53.332 user 0m1.277s 00:06:53.332 sys 0m0.145s 00:06:53.332 23:46:58 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.332 23:46:58 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:53.332 ************************************ 00:06:53.332 END TEST accel_copy 00:06:53.332 ************************************ 00:06:53.332 23:46:58 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:53.332 23:46:58 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:53.332 23:46:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.332 23:46:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.332 ************************************ 00:06:53.332 START TEST accel_fill 00:06:53.332 ************************************ 00:06:53.332 23:46:58 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:53.332 23:46:58 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:53.332 23:46:58 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:53.332 23:46:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:58 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:53.332 23:46:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 
00:06:53.332 23:46:58 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:53.332 23:46:58 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:53.332 23:46:58 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.332 23:46:58 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.332 23:46:58 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.332 23:46:58 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.332 23:46:58 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.332 23:46:58 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:53.332 23:46:58 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:53.332 [2024-07-24 23:46:58.787493] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:53.332 [2024-07-24 23:46:58.787560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665413 ] 00:06:53.332 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.332 [2024-07-24 23:46:58.848782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.332 [2024-07-24 23:46:58.943886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:59 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:59 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:59 
accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.332 23:46:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.704 23:47:00 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:54.704 23:47:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.704 00:06:54.704 real 0m1.403s 00:06:54.704 user 0m1.262s 00:06:54.704 sys 0m0.144s 00:06:54.704 23:47:00 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.704 23:47:00 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:54.704 ************************************ 00:06:54.704 END TEST accel_fill 00:06:54.704 ************************************ 00:06:54.704 23:47:00 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:54.704 23:47:00 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:54.704 23:47:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.704 23:47:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.704 ************************************ 00:06:54.704 START TEST accel_copy_crc32c 00:06:54.704 ************************************ 00:06:54.704 23:47:00 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 
00:06:54.704 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:54.704 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:54.704 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.704 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:54.704 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.704 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:54.704 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:54.704 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.704 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.704 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.704 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.704 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.704 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:54.704 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:54.704 [2024-07-24 23:47:00.237161] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:54.704 [2024-07-24 23:47:00.237217] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665681 ] 00:06:54.704 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.704 [2024-07-24 23:47:00.297832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.704 [2024-07-24 23:47:00.394288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.962 
23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.962 23:47:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.334 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.334 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.334 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.334 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.334 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.334 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.334 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.334 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.334 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.334 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.334 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.334 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.334 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.335 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 
23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.335 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.335 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.335 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:56.335 23:47:01 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.335 00:06:56.335 real 0m1.414s 00:06:56.335 user 0m1.261s 00:06:56.335 sys 0m0.156s 00:06:56.335 23:47:01 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.335 23:47:01 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:56.335 ************************************ 00:06:56.335 END TEST accel_copy_crc32c 00:06:56.335 ************************************ 00:06:56.335 23:47:01 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:56.335 23:47:01 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:56.335 23:47:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.335 23:47:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.335 ************************************ 00:06:56.335 START TEST accel_copy_crc32c_C2 00:06:56.335 ************************************ 00:06:56.335 23:47:01 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:56.335 [2024-07-24 23:47:01.695923] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:56.335 [2024-07-24 23:47:01.695987] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665841 ] 00:06:56.335 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.335 [2024-07-24 23:47:01.758905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.335 [2024-07-24 23:47:01.854810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 
00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@20 -- # val=Yes 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.335 23:47:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r 
var val 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.709 00:06:57.709 real 0m1.422s 00:06:57.709 user 0m1.273s 00:06:57.709 sys 0m0.151s 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.709 23:47:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:57.709 ************************************ 00:06:57.709 END TEST accel_copy_crc32c_C2 00:06:57.709 ************************************ 00:06:57.709 23:47:03 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:57.709 23:47:03 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:57.709 23:47:03 accel -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.709 23:47:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.709 ************************************ 00:06:57.709 START TEST accel_dualcast 00:06:57.709 ************************************ 00:06:57.709 23:47:03 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:57.709 [2024-07-24 23:47:03.164503] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:57.709 [2024-07-24 23:47:03.164565] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665999 ] 00:06:57.709 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.709 [2024-07-24 23:47:03.224628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.709 [2024-07-24 23:47:03.320216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.709 23:47:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.709 
23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.710 23:47:03 accel.accel_dualcast -- 
00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:06:57.710 23:47:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:59.082 23:47:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:59.082 23:47:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:59.082 23:47:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:59.082 23:47:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:59.082
00:06:59.082 real    0m1.416s
00:06:59.082 user    0m1.263s
00:06:59.082 sys     0m0.155s
00:06:59.082 23:47:04 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:59.082 23:47:04 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:06:59.082 ************************************
00:06:59.082 END TEST accel_dualcast
00:06:59.082 ************************************
00:06:59.082 23:47:04 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:59.082 23:47:04 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:06:59.082 23:47:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:59.082 23:47:04 accel -- common/autotest_common.sh@10 -- # set +x
00:06:59.082 ************************************
00:06:59.082 START TEST accel_compare
00:06:59.082 ************************************
00:06:59.082 23:47:04 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y
00:06:59.082 23:47:04 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:06:59.082 23:47:04 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:06:59.082 23:47:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:59.082 23:47:04 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:06:59.082 23:47:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:59.082 23:47:04 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:59.082 23:47:04 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:06:59.082 23:47:04 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:59.082 23:47:04 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
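The `val=` / `case "$var" in` / `IFS=:` / `read -r var val` cycle that dominates this trace is accel.sh splitting `key:value` pairs from the test configuration. A minimal bash sketch of that parsing pattern follows; the function and key names are illustrative, not the actual accel.sh source:

```shell
#!/usr/bin/env bash
# Split "key:value" lines on ":" and dispatch on the key, as the
# xtrace above suggests accel.sh does. Names here are hypothetical.
parse_accel_output() {
  local var val accel_opc="" accel_module=""
  while IFS=: read -r var val; do
    case "$var" in
      opc) accel_opc=$val ;;        # e.g. "compare"
      module) accel_module=$val ;;  # e.g. "software"
      *) : ;;                       # ignore keys this sketch does not track
    esac
  done
  echo "$accel_opc $accel_module"
}

# Demo: feed two key:value lines.
parse_accel_output <<'EOF'
opc:compare
module:software
EOF
```

Running the demo prints `compare software`; `IFS=:` makes `read` put everything before the first colon in `var` and the rest in `val`.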
00:06:59.082 23:47:04 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:59.082 23:47:04 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:59.082 23:47:04 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:59.082 23:47:04 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=,
00:06:59.082 23:47:04 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
[2024-07-24 23:47:04.630822] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
[2024-07-24 23:47:04.630886] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666161 ]
00:06:59.082 EAL: No free 2048 kB hugepages reported on node 1
00:06:59.082 [2024-07-24 23:47:04.694871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:59.082 [2024-07-24 23:47:04.792927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:59.341 23:47:04 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:59.341 23:47:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:59.341 23:47:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:59.341 23:47:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:59.341 23:47:04 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:06:59.341 23:47:04 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:06:59.341 23:47:04 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:06:59.341 23:47:04 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:59.341 23:47:04 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:06:59.341 23:47:04 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:06:59.341 23:47:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:06:59.341 23:47:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:06:59.341 23:47:04 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:06:59.341 23:47:04 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:06:59.341 23:47:04 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:07:00.714 23:47:06 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:07:00.714 23:47:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:00.714 23:47:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:07:00.714 23:47:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:00.714
00:07:00.714 real    0m1.416s
00:07:00.714 user    0m1.265s
00:07:00.714 sys     0m0.154s
00:07:00.714 23:47:06 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:00.714 23:47:06 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:07:00.714 ************************************
00:07:00.714 END TEST accel_compare
00:07:00.714 ************************************
00:07:00.714 23:47:06 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:07:00.714 23:47:06 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:07:00.714 23:47:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:00.714 23:47:06 accel -- common/autotest_common.sh@10 -- # set +x
00:07:00.714 ************************************
00:07:00.714 START TEST accel_xor
00:07:00.714 ************************************
00:07:00.714 23:47:06 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
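Each block above is driven by `run_test NAME command...`, which brackets the command with `START TEST` / `END TEST` banners and reports `real`/`user`/`sys` timings. A hedged bash sketch of that wrapper shape, not the actual `common/autotest_common.sh` implementation:

```shell
#!/usr/bin/env bash
# Banner + timed run + banner, mimicking the run_test output seen in
# this log. Illustrative sketch only.
run_test_sketch() {
  local name=$1
  shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test_sketch demo echo hello
```

Note that the `time` keyword writes its `real`/`user`/`sys` lines to stderr, which a harness then interleaves with the stdout banners, much as seen in this log.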
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
[2024-07-24 23:47:06.089411] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
[2024-07-24 23:47:06.089473] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666429 ]
00:07:00.714 EAL: No free 2048 kB hugepages reported on node 1
00:07:00.714 [2024-07-24 23:47:06.150529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:00.714 [2024-07-24 23:47:06.242494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@20 -- # val=2
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:00.714 23:47:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:00.715 23:47:06 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:07:00.715 23:47:06 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:07:00.715 23:47:06 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:02.087
00:07:02.087 real    0m1.401s
00:07:02.087 user    0m1.256s
00:07:02.087 sys     0m0.148s
00:07:02.087 23:47:07 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:02.087 23:47:07 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:07:02.087 ************************************
00:07:02.087 END TEST accel_xor
00:07:02.087 ************************************
00:07:02.087 23:47:07 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:07:02.087 23:47:07 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:07:02.087 23:47:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:02.087 23:47:07 accel -- common/autotest_common.sh@10 -- # set +x
00:07:02.087 ************************************
00:07:02.087 START TEST accel_xor
00:07:02.087 ************************************
00:07:02.087 23:47:07 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
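The `-w xor -y` workload XORs source buffers into a destination and verifies the result, and the `-x 3` run started above raises the source count to three. The per-byte operation can be illustrated with plain shell arithmetic (an illustration only, not SPDK code):

```shell
#!/usr/bin/env bash
# Three-source XOR on single byte values, mirroring what a
# "-w xor -x 3" workload computes per byte. Purely illustrative.
xor3() { printf '%d\n' $(( $1 ^ $2 ^ $3 )); }

xor3 0xF0 0x0F 0xFF   # 0xF0 ^ 0x0F = 0xFF, then ^ 0xFF cancels to 0
```

Because XOR is associative and self-inverse, recomputing the XOR of the sources and comparing it with the destination is a cheap verification step.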
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
[2024-07-24 23:47:07.543540] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
[2024-07-24 23:47:07.543605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666587 ]
00:07:02.087 EAL: No free 2048 kB hugepages reported on node 1
00:07:02.087 [2024-07-24 23:47:07.605775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:02.087 [2024-07-24 23:47:07.698551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:07:02.087 23:47:07 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:07:03.459 23:47:08 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:07:03.460 23:47:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:03.460 23:47:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:03.460 23:47:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:03.460
00:07:03.460 real    0m1.418s
00:07:03.460 user    0m1.263s
00:07:03.460 sys     0m0.158s
00:07:03.460 23:47:08 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:03.460 23:47:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:07:03.460 ************************************
00:07:03.460 END TEST accel_xor
00:07:03.460 ************************************
00:07:03.460 23:47:08 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:07:03.460 23:47:08 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:07:03.460 23:47:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:03.460 23:47:08 accel -- common/autotest_common.sh@10 -- # set +x
00:07:03.460 ************************************
00:07:03.460 START TEST accel_dif_verify
00:07:03.460 ************************************
00:07:03.460 23:47:08 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify
00:07:03.460 23:47:08 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
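Throughout the trace, `build_accel_config` collects fragments in the `accel_json_cfg=()` array, joins them with `local IFS=,`, pipes the JSON through `jq -r .`, and hands it to accel_perf as `-c /dev/fd/62`. A simplified bash sketch of that join-and-feed pattern follows; the fragment contents and function name are illustrative:

```shell
#!/usr/bin/env bash
# Join JSON fragments with commas and print the assembled config,
# like the build_accel_config pattern in this trace. Illustrative only.
accel_json_cfg=('"method": "sw"' '"threads": 1')

join_cfg() {
  local IFS=,              # "$*"-style expansion joins on the first IFS char
  echo "{ ${accel_json_cfg[*]} }"
}

# A real run would expose the config on a file descriptor, e.g.:
#   some_tool -c <(join_cfg)
join_cfg
```

With `IFS=,`, `${accel_json_cfg[*]}` inside double quotes expands the array elements joined by commas, which is what makes the array a convenient accumulator for JSON object members.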
00:07:03.460 23:47:08 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module
00:07:03.460 23:47:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:07:03.460 23:47:08 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:07:03.460 23:47:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:07:03.460 23:47:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:07:03.460 23:47:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:07:03.460 23:47:08 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:03.460 23:47:08 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:03.460 23:47:08 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:03.460 23:47:08 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:03.460 23:47:08 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:03.460 23:47:08 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=,
00:07:03.460 23:47:08 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
[2024-07-24 23:47:09.007975] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:07:03.460 [2024-07-24 23:47:09.008035] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666744 ]
00:07:03.460 EAL: No free 2048 kB hugepages reported on node 1
00:07:03.460 [2024-07-24 23:47:09.068074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:03.460 [2024-07-24 23:47:09.163473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:03.718 23:47:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:07:03.718 23:47:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:07:03.718 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:07:03.718 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:07:03.718 23:47:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:07:03.718 23:47:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:07:03.718 23:47:09 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:07:03.718 23:47:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:03.718 23:47:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:03.718 23:47:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:07:03.718 23:47:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:07:03.718 23:47:09
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.719 23:47:09 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.719 23:47:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:05.092 23:47:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.092 00:07:05.092 real 0m1.406s 00:07:05.092 user 0m1.266s 00:07:05.092 sys 0m0.144s 00:07:05.092 23:47:10 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.092 23:47:10 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:05.092 ************************************ 00:07:05.092 END TEST accel_dif_verify 00:07:05.092 ************************************ 00:07:05.092 23:47:10 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:05.092 23:47:10 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:05.092 23:47:10 accel -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:07:05.092 23:47:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.092 ************************************ 00:07:05.092 START TEST accel_dif_generate 00:07:05.092 ************************************ 00:07:05.092 23:47:10 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:05.092 [2024-07-24 23:47:10.456128] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
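The dif_verify/dif_generate tests above exercise T10 DIF-style protection: the trace shows 4096-byte data blocks paired with 8-byte metadata tuples, the first field of which is a guard checksum over the block. As an illustrative-only sketch of that verify step (real DIF uses the CRC16-T10 guard polynomial; CRC32 stands in here, and both helper names are hypothetical, not SPDK APIs):

```python
import zlib

BLOCK_SIZE = 4096  # matches the '4096 bytes' block size in the test output
META_SIZE = 8      # matches the '8 bytes' metadata size in the test output

def make_guard(block: bytes) -> bytes:
    """Build an 8-byte protection tuple whose leading field guards the block."""
    assert len(block) == BLOCK_SIZE
    guard = zlib.crc32(block).to_bytes(4, "big")  # stand-in for CRC16-T10
    return guard + b"\x00" * (META_SIZE - len(guard))

def verify_guard(block: bytes, meta: bytes) -> bool:
    """dif_verify-style check: recompute the guard and compare to stored metadata."""
    return make_guard(block) == meta
```

dif_generate corresponds to producing the tuple, dif_verify to recomputing and comparing it; a single flipped bit in the data block fails the check.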
00:07:05.092 [2024-07-24 23:47:10.456197] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667013 ] 00:07:05.092 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.092 [2024-07-24 23:47:10.516340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.092 [2024-07-24 23:47:10.610184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.092 23:47:10 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.092 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.093 23:47:10 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.093 23:47:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.492 23:47:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.493 23:47:11 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:06.493 23:47:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.493 00:07:06.493 real 0m1.404s 00:07:06.493 user 0m1.261s 00:07:06.493 sys 0m0.148s 00:07:06.493 23:47:11 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.493 23:47:11 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:06.493 ************************************ 00:07:06.493 END TEST accel_dif_generate 00:07:06.493 ************************************ 00:07:06.493 23:47:11 accel -- 
accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:06.493 23:47:11 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:06.493 23:47:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.493 23:47:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.493 ************************************ 00:07:06.493 START TEST accel_dif_generate_copy 00:07:06.493 ************************************ 00:07:06.493 23:47:11 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:06.493 23:47:11 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:06.493 23:47:11 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:06.493 23:47:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:11 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:06.493 23:47:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:06.493 23:47:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:06.493 23:47:11 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.493 23:47:11 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.493 23:47:11 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.493 23:47:11 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.493 23:47:11 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.493 23:47:11 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:06.493 23:47:11 accel.accel_dif_generate_copy -- 
accel/accel.sh@41 -- # jq -r . 00:07:06.493 [2024-07-24 23:47:11.905324] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:06.493 [2024-07-24 23:47:11.905395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667179 ] 00:07:06.493 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.493 [2024-07-24 23:47:11.964335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.493 [2024-07-24 23:47:12.059864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:12 accel.accel_dif_generate_copy 
-- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=No 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.494 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.494 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.494 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.494 23:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.865 23:47:13 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.865 00:07:07.865 real 0m1.414s 00:07:07.865 user 0m1.268s 00:07:07.865 sys 0m0.148s 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.865 23:47:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:07.865 ************************************ 00:07:07.865 END TEST accel_dif_generate_copy 00:07:07.865 ************************************ 00:07:07.865 23:47:13 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:07.866 23:47:13 accel -- 
accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:07.866 23:47:13 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:07.866 23:47:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.866 23:47:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.866 ************************************ 00:07:07.866 START TEST accel_comp 00:07:07.866 ************************************ 00:07:07.866 23:47:13 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:07.866 23:47:13 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:07.866 23:47:13 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:07.866 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.866 23:47:13 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:07.866 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.866 23:47:13 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:07.866 23:47:13 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:07.866 23:47:13 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.866 23:47:13 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.866 23:47:13 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.866 23:47:13 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.866 23:47:13 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.866 23:47:13 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:07.866 23:47:13 
accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:07.866 [2024-07-24 23:47:13.361937] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:07.866 [2024-07-24 23:47:13.362003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667335 ] 00:07:07.866 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.866 [2024-07-24 23:47:13.422994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.866 [2024-07-24 23:47:13.518810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@20 -- # 
val= 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.123 23:47:13 accel.accel_comp -- 
accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" 
in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.123 23:47:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.055 23:47:14 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:09.055 23:47:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.055 00:07:09.055 real 0m1.420s 00:07:09.055 user 0m1.282s 00:07:09.055 sys 0m0.142s 00:07:09.055 23:47:14 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.055 23:47:14 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:09.055 ************************************ 00:07:09.055 END TEST accel_comp 00:07:09.055 ************************************ 00:07:09.314 23:47:14 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:09.314 23:47:14 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:09.314 23:47:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.314 23:47:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.314 ************************************ 00:07:09.314 START TEST accel_decomp 00:07:09.314 ************************************ 00:07:09.314 23:47:14 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:09.314 23:47:14 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:09.314 23:47:14 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:09.314 23:47:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.314 23:47:14 accel.accel_decomp -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:09.314 23:47:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.314 23:47:14 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:09.314 23:47:14 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:09.314 23:47:14 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.314 23:47:14 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.314 23:47:14 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.314 23:47:14 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.314 23:47:14 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.314 23:47:14 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:09.314 23:47:14 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:09.314 [2024-07-24 23:47:14.835773] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:09.314 [2024-07-24 23:47:14.835835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667486 ] 00:07:09.314 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.314 [2024-07-24 23:47:14.898813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.314 [2024-07-24 23:47:14.991797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.572 23:47:15 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.572 23:47:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.944 23:47:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.944 23:47:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.944 23:47:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.944 23:47:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.944 23:47:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:10.945 23:47:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.945 00:07:10.945 real 0m1.420s 00:07:10.945 user 0m1.276s 00:07:10.945 sys 0m0.147s 00:07:10.945 23:47:16 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.945 23:47:16 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:10.945 ************************************ 00:07:10.945 END TEST accel_decomp 00:07:10.945 ************************************ 00:07:10.945 23:47:16 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:10.945 23:47:16 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:10.945 23:47:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.945 23:47:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.945 ************************************ 00:07:10.945 START TEST accel_decmop_full 00:07:10.945 ************************************ 00:07:10.945 23:47:16 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 
00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:10.945 [2024-07-24 23:47:16.297168] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:10.945 [2024-07-24 23:47:16.297225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667762 ] 00:07:10.945 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.945 [2024-07-24 23:47:16.360427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.945 [2024-07-24 23:47:16.452796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:10.945 23:47:16 accel.accel_decmop_full -- 
accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 
23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 
accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.945 23:47:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.317 23:47:17 
accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:12.317 23:47:17 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.317 00:07:12.317 real 0m1.404s 00:07:12.317 user 0m1.261s 00:07:12.317 sys 0m0.145s 00:07:12.317 23:47:17 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.317 23:47:17 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:12.317 ************************************ 00:07:12.317 END TEST accel_decmop_full 00:07:12.317 ************************************ 00:07:12.317 23:47:17 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:12.317 23:47:17 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:12.317 23:47:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.317 23:47:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.317 ************************************ 
00:07:12.317 START TEST accel_decomp_mcore 00:07:12.317 ************************************ 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:12.317 [2024-07-24 23:47:17.741293] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:12.317 [2024-07-24 23:47:17.741352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667921 ] 00:07:12.317 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.317 [2024-07-24 23:47:17.801791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.317 [2024-07-24 23:47:17.901334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.317 [2024-07-24 23:47:17.901389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.317 [2024-07-24 23:47:17.901502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.317 [2024-07-24 23:47:17.901505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.317 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.318 23:47:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.689 23:47:19 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.689 00:07:13.689 real 0m1.422s 00:07:13.689 user 0m4.736s 00:07:13.689 sys 0m0.151s 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.689 23:47:19 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:13.689 ************************************ 00:07:13.689 END TEST accel_decomp_mcore 00:07:13.689 ************************************ 00:07:13.689 23:47:19 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.689 23:47:19 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:13.689 23:47:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.689 23:47:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.689 ************************************ 00:07:13.689 START TEST accel_decomp_full_mcore 00:07:13.689 ************************************ 00:07:13.689 23:47:19 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.689 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:13.689 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:13.690 23:47:19 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:13.690 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.690 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.690 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.690 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:13.690 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.690 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.690 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.690 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.690 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.690 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:13.690 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:13.690 [2024-07-24 23:47:19.210805] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:13.690 [2024-07-24 23:47:19.210868] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668083 ] 00:07:13.690 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.690 [2024-07-24 23:47:19.270958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.690 [2024-07-24 23:47:19.370407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.690 [2024-07-24 23:47:19.370459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.690 [2024-07-24 23:47:19.370577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.690 [2024-07-24 23:47:19.370580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val=0xf 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- 
# case "$var" in 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.948 23:47:19 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.948 23:47:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.319 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.319 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.319 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.319 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.319 23:47:20 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:07:15.319 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.319 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.319 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.319 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.319 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.320 00:07:15.320 real 0m1.430s 00:07:15.320 user 0m4.775s 00:07:15.320 sys 0m0.149s 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.320 23:47:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:15.320 ************************************ 00:07:15.320 END TEST accel_decomp_full_mcore 00:07:15.320 ************************************ 00:07:15.320 23:47:20 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:15.320 23:47:20 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:15.320 23:47:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.320 23:47:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.320 
************************************ 00:07:15.320 START TEST accel_decomp_mthread 00:07:15.320 ************************************ 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:15.320 [2024-07-24 23:47:20.684041] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:15.320 [2024-07-24 23:47:20.684124] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668345 ] 00:07:15.320 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.320 [2024-07-24 23:47:20.744789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.320 [2024-07-24 23:47:20.840886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 
00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@22 -- 
# accel_module=software 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.320 
23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:15.320 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.321 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.321 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.321 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.321 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.321 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.321 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.321 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.321 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.321 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.321 23:47:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.691 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.691 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.691 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 23:47:22 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:16.691 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.691 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.692 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.692 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.692 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.692 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:16.692 23:47:22 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.692 00:07:16.692 real 0m1.412s 00:07:16.692 user 0m1.273s 00:07:16.692 sys 0m0.143s 00:07:16.692 23:47:22 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.692 23:47:22 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 
00:07:16.692 ************************************ 00:07:16.692 END TEST accel_decomp_mthread 00:07:16.692 ************************************ 00:07:16.692 23:47:22 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:16.692 23:47:22 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:16.692 23:47:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.692 23:47:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.692 ************************************ 00:07:16.692 START TEST accel_decomp_full_mthread 00:07:16.692 ************************************ 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.692 23:47:22 
accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:16.692 [2024-07-24 23:47:22.139649] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:16.692 [2024-07-24 23:47:22.139720] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668519 ] 00:07:16.692 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.692 [2024-07-24 23:47:22.201929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.692 [2024-07-24 23:47:22.293355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:16.692 23:47:22 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:16.692 23:47:22 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.692 23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.692 
23:47:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.067 23:47:23 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.067 00:07:18.067 real 0m1.456s 00:07:18.067 user 0m1.319s 00:07:18.067 sys 0m0.140s 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.067 23:47:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:18.067 ************************************ 00:07:18.067 END TEST accel_decomp_full_mthread 00:07:18.067 ************************************ 00:07:18.067 23:47:23 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:18.067 23:47:23 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:18.067 23:47:23 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:18.067 23:47:23 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:18.067 23:47:23 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.067 23:47:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.067 23:47:23 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 
00:07:18.067 23:47:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.067 23:47:23 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.067 23:47:23 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.067 23:47:23 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.067 23:47:23 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:18.067 23:47:23 accel -- accel/accel.sh@41 -- # jq -r . 00:07:18.067 ************************************ 00:07:18.067 START TEST accel_dif_functional_tests 00:07:18.067 ************************************ 00:07:18.067 23:47:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:18.067 [2024-07-24 23:47:23.663502] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:18.067 [2024-07-24 23:47:23.663562] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668673 ] 00:07:18.067 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.067 [2024-07-24 23:47:23.724156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.325 [2024-07-24 23:47:23.820646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.325 [2024-07-24 23:47:23.820712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.325 [2024-07-24 23:47:23.820715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.325 00:07:18.325 00:07:18.325 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.325 http://cunit.sourceforge.net/ 00:07:18.325 00:07:18.325 00:07:18.325 Suite: accel_dif 00:07:18.325 Test: verify: DIF generated, GUARD check ...passed 00:07:18.325 Test: verify: DIF generated, APPTAG check ...passed 00:07:18.325 Test: verify: DIF generated, REFTAG check ...passed 
00:07:18.325 Test: verify: DIF not generated, GUARD check ...[2024-07-24 23:47:23.915901] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:18.325 passed 00:07:18.325 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 23:47:23.915968] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:18.325 passed 00:07:18.325 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 23:47:23.916000] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:18.325 passed 00:07:18.325 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:18.325 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 23:47:23.916059] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:18.325 passed 00:07:18.325 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:18.325 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:18.325 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:18.325 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-24 23:47:23.916238] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:18.325 passed 00:07:18.325 Test: verify copy: DIF generated, GUARD check ...passed 00:07:18.325 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:18.325 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:18.325 Test: verify copy: DIF not generated, GUARD check ...[2024-07-24 23:47:23.916427] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:18.325 passed 00:07:18.325 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-24 23:47:23.916484] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:18.325 passed 00:07:18.325 Test: verify copy: DIF not generated, REFTAG check 
...[2024-07-24 23:47:23.916518] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:18.325 passed 00:07:18.325 Test: generate copy: DIF generated, GUARD check ...passed 00:07:18.325 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:18.325 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:18.325 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:18.325 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:18.325 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:18.325 Test: generate copy: iovecs-len validate ...[2024-07-24 23:47:23.916742] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:18.325 passed 00:07:18.325 Test: generate copy: buffer alignment validate ...passed 00:07:18.325 00:07:18.326 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.326 suites 1 1 n/a 0 0 00:07:18.326 tests 26 26 26 0 0 00:07:18.326 asserts 115 115 115 0 n/a 00:07:18.326 00:07:18.326 Elapsed time = 0.003 seconds 00:07:18.584 00:07:18.584 real 0m0.504s 00:07:18.584 user 0m0.778s 00:07:18.584 sys 0m0.182s 00:07:18.584 23:47:24 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.584 23:47:24 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:18.584 ************************************ 00:07:18.584 END TEST accel_dif_functional_tests 00:07:18.584 ************************************ 00:07:18.584 00:07:18.584 real 0m31.898s 00:07:18.584 user 0m35.296s 00:07:18.584 sys 0m4.617s 00:07:18.584 23:47:24 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.584 23:47:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.584 ************************************ 00:07:18.584 END TEST accel 00:07:18.584 ************************************ 00:07:18.584 23:47:24 -- 
spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:18.584 23:47:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:18.584 23:47:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.584 23:47:24 -- common/autotest_common.sh@10 -- # set +x 00:07:18.584 ************************************ 00:07:18.584 START TEST accel_rpc 00:07:18.584 ************************************ 00:07:18.584 23:47:24 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:18.584 * Looking for test storage... 00:07:18.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:18.584 23:47:24 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:18.584 23:47:24 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=668855 00:07:18.584 23:47:24 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:18.584 23:47:24 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 668855 00:07:18.584 23:47:24 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 668855 ']' 00:07:18.584 23:47:24 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.584 23:47:24 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:18.584 23:47:24 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:18.584 23:47:24 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:18.584 23:47:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.842 [2024-07-24 23:47:24.304834] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:18.842 [2024-07-24 23:47:24.304905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668855 ] 00:07:18.842 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.842 [2024-07-24 23:47:24.361456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.843 [2024-07-24 23:47:24.449251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.843 23:47:24 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:18.843 23:47:24 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:18.843 23:47:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:18.843 23:47:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:18.843 23:47:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:18.843 23:47:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:18.843 23:47:24 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:18.843 23:47:24 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:18.843 23:47:24 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.843 23:47:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.843 ************************************ 00:07:18.843 START TEST accel_assign_opcode 00:07:18.843 ************************************ 00:07:18.843 23:47:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:18.843 23:47:24 accel_rpc.accel_assign_opcode -- 
accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:18.843 23:47:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.843 23:47:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:18.843 [2024-07-24 23:47:24.541902] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:18.843 23:47:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.843 23:47:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:18.843 23:47:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.843 23:47:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:18.843 [2024-07-24 23:47:24.549903] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:18.843 23:47:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.843 23:47:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:18.843 23:47:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.843 23:47:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:19.101 23:47:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.101 23:47:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:19.101 23:47:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.101 23:47:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:19.101 23:47:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:19.101 23:47:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- 
# grep software 00:07:19.358 23:47:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.358 software 00:07:19.358 00:07:19.358 real 0m0.308s 00:07:19.358 user 0m0.037s 00:07:19.358 sys 0m0.009s 00:07:19.358 23:47:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.358 23:47:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:19.358 ************************************ 00:07:19.358 END TEST accel_assign_opcode 00:07:19.358 ************************************ 00:07:19.358 23:47:24 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 668855 00:07:19.358 23:47:24 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 668855 ']' 00:07:19.358 23:47:24 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 668855 00:07:19.358 23:47:24 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:19.358 23:47:24 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:19.358 23:47:24 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 668855 00:07:19.358 23:47:24 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:19.358 23:47:24 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:19.358 23:47:24 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 668855' 00:07:19.358 killing process with pid 668855 00:07:19.358 23:47:24 accel_rpc -- common/autotest_common.sh@965 -- # kill 668855 00:07:19.358 23:47:24 accel_rpc -- common/autotest_common.sh@970 -- # wait 668855 00:07:19.616 00:07:19.616 real 0m1.106s 00:07:19.616 user 0m1.010s 00:07:19.616 sys 0m0.452s 00:07:19.616 23:47:25 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.616 23:47:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.616 ************************************ 00:07:19.616 END TEST accel_rpc 00:07:19.616 ************************************ 
00:07:19.616 23:47:25 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:19.616 23:47:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:19.616 23:47:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.616 23:47:25 -- common/autotest_common.sh@10 -- # set +x 00:07:19.874 ************************************ 00:07:19.874 START TEST app_cmdline 00:07:19.874 ************************************ 00:07:19.874 23:47:25 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:19.874 * Looking for test storage... 00:07:19.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:19.874 23:47:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:19.874 23:47:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=669063 00:07:19.874 23:47:25 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:19.874 23:47:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 669063 00:07:19.874 23:47:25 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 669063 ']' 00:07:19.874 23:47:25 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.874 23:47:25 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:19.874 23:47:25 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:19.874 23:47:25 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:19.874 23:47:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:19.874 [2024-07-24 23:47:25.455660] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:19.874 [2024-07-24 23:47:25.455737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669063 ] 00:07:19.874 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.874 [2024-07-24 23:47:25.511798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.131 [2024-07-24 23:47:25.599685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.388 23:47:25 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:20.388 23:47:25 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:20.388 23:47:25 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:20.388 { 00:07:20.388 "version": "SPDK v24.05.1-pre git sha1 241d0f3c9", 00:07:20.388 "fields": { 00:07:20.388 "major": 24, 00:07:20.388 "minor": 5, 00:07:20.388 "patch": 1, 00:07:20.388 "suffix": "-pre", 00:07:20.388 "commit": "241d0f3c9" 00:07:20.388 } 00:07:20.388 } 00:07:20.388 23:47:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:20.388 23:47:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:20.388 23:47:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:20.388 23:47:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:20.645 23:47:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:20.645 23:47:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:20.645 23:47:26 
app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.645 23:47:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:20.645 23:47:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:20.645 23:47:26 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.645 23:47:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:20.645 23:47:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:20.645 23:47:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:20.645 23:47:26 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:20.645 23:47:26 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:20.645 23:47:26 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.645 23:47:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.645 23:47:26 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.645 23:47:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.645 23:47:26 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.645 23:47:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.645 23:47:26 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.645 23:47:26 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:20.645 23:47:26 
app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:20.902 request: 00:07:20.902 { 00:07:20.902 "method": "env_dpdk_get_mem_stats", 00:07:20.902 "req_id": 1 00:07:20.902 } 00:07:20.902 Got JSON-RPC error response 00:07:20.902 response: 00:07:20.902 { 00:07:20.903 "code": -32601, 00:07:20.903 "message": "Method not found" 00:07:20.903 } 00:07:20.903 23:47:26 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:20.903 23:47:26 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.903 23:47:26 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.903 23:47:26 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.903 23:47:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 669063 00:07:20.903 23:47:26 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 669063 ']' 00:07:20.903 23:47:26 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 669063 00:07:20.903 23:47:26 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:20.903 23:47:26 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:20.903 23:47:26 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 669063 00:07:20.903 23:47:26 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:20.903 23:47:26 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:20.903 23:47:26 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 669063' 00:07:20.903 killing process with pid 669063 00:07:20.903 23:47:26 app_cmdline -- common/autotest_common.sh@965 -- # kill 669063 00:07:20.903 23:47:26 app_cmdline -- common/autotest_common.sh@970 -- # wait 669063 00:07:21.160 00:07:21.160 real 0m1.469s 00:07:21.160 user 0m1.771s 00:07:21.160 sys 0m0.470s 00:07:21.160 23:47:26 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 
00:07:21.160 23:47:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:21.160 ************************************ 00:07:21.160 END TEST app_cmdline 00:07:21.160 ************************************ 00:07:21.160 23:47:26 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:21.160 23:47:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:21.160 23:47:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.160 23:47:26 -- common/autotest_common.sh@10 -- # set +x 00:07:21.160 ************************************ 00:07:21.160 START TEST version 00:07:21.160 ************************************ 00:07:21.160 23:47:26 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:21.419 * Looking for test storage... 00:07:21.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:21.419 23:47:26 version -- app/version.sh@17 -- # get_header_version major 00:07:21.419 23:47:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:21.419 23:47:26 version -- app/version.sh@14 -- # cut -f2 00:07:21.419 23:47:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.419 23:47:26 version -- app/version.sh@17 -- # major=24 00:07:21.419 23:47:26 version -- app/version.sh@18 -- # get_header_version minor 00:07:21.419 23:47:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:21.419 23:47:26 version -- app/version.sh@14 -- # cut -f2 00:07:21.419 23:47:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.419 23:47:26 version -- app/version.sh@18 -- # minor=5 00:07:21.419 23:47:26 version -- app/version.sh@19 -- # get_header_version patch 00:07:21.419 23:47:26 version 
-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:21.419 23:47:26 version -- app/version.sh@14 -- # cut -f2 00:07:21.419 23:47:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.419 23:47:26 version -- app/version.sh@19 -- # patch=1 00:07:21.419 23:47:26 version -- app/version.sh@20 -- # get_header_version suffix 00:07:21.419 23:47:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:21.419 23:47:26 version -- app/version.sh@14 -- # cut -f2 00:07:21.419 23:47:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.419 23:47:26 version -- app/version.sh@20 -- # suffix=-pre 00:07:21.419 23:47:26 version -- app/version.sh@22 -- # version=24.5 00:07:21.419 23:47:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:21.419 23:47:26 version -- app/version.sh@25 -- # version=24.5.1 00:07:21.419 23:47:26 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:21.419 23:47:26 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:21.419 23:47:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:21.419 23:47:26 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:21.419 23:47:26 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:21.419 00:07:21.419 real 0m0.102s 00:07:21.419 user 0m0.054s 00:07:21.419 sys 0m0.070s 00:07:21.419 23:47:26 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.419 23:47:26 version -- common/autotest_common.sh@10 -- # set +x 
00:07:21.419 ************************************ 00:07:21.419 END TEST version 00:07:21.419 ************************************ 00:07:21.419 23:47:26 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:21.419 23:47:26 -- spdk/autotest.sh@198 -- # uname -s 00:07:21.419 23:47:26 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:21.419 23:47:26 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:21.419 23:47:26 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:21.419 23:47:26 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:21.419 23:47:26 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:21.419 23:47:26 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:21.419 23:47:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.419 23:47:26 -- common/autotest_common.sh@10 -- # set +x 00:07:21.419 23:47:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:21.419 23:47:27 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:21.419 23:47:27 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:21.419 23:47:27 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:21.419 23:47:27 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:21.419 23:47:27 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:21.419 23:47:27 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:21.419 23:47:27 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:21.419 23:47:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.419 23:47:27 -- common/autotest_common.sh@10 -- # set +x 00:07:21.419 ************************************ 00:07:21.419 START TEST nvmf_tcp 00:07:21.419 ************************************ 00:07:21.419 23:47:27 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:21.419 * Looking for test storage... 
00:07:21.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.419 23:47:27 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.419 
23:47:27 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.419 23:47:27 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.419 23:47:27 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.419 23:47:27 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.419 23:47:27 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.419 23:47:27 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.419 23:47:27 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:21.420 23:47:27 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.420 23:47:27 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:21.420 23:47:27 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.420 23:47:27 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.420 23:47:27 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.420 23:47:27 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.420 23:47:27 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.420 23:47:27 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.420 23:47:27 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.420 23:47:27 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.420 23:47:27 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:21.420 23:47:27 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:21.420 23:47:27 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:21.420 23:47:27 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:21.420 23:47:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.420 23:47:27 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:21.420 23:47:27 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:21.420 23:47:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:21.420 23:47:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.420 23:47:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.678 
************************************ 00:07:21.678 START TEST nvmf_example 00:07:21.678 ************************************ 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:21.678 * Looking for test storage... 00:07:21.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.678 23:47:27 
nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:21.678 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:21.679 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:21.679 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.679 23:47:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:21.679 23:47:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:07:21.679 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:21.679 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:21.679 23:47:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:21.679 23:47:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:23.578 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:23.579 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:23.579 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:23.579 Found net devices under 0000:09:00.0: cvl_0_0 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:23.579 Found net devices under 0000:09:00.1: cvl_0_1 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:23.579 23:47:29 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:23.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:23.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:07:23.579 00:07:23.579 --- 10.0.0.2 ping statistics --- 00:07:23.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.579 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:23.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:07:23.579 00:07:23.579 --- 10.0.0.1 ping statistics --- 00:07:23.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.579 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.579 23:47:29 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=670967 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 670967 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 670967 ']' 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:23.579 23:47:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.579 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.512 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:24.512 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:24.512 23:47:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:24.512 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:24.512 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.769 23:47:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:24.769 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.769 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.769 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.769 23:47:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:24.769 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.769 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.769 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.769 23:47:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:24.769 23:47:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:24.770 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.770 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.770 23:47:30 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.770 23:47:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:24.770 23:47:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:24.770 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.770 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.770 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.770 23:47:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:24.770 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.770 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.770 23:47:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.770 23:47:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:24.770 23:47:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:24.770 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.805 Initializing NVMe Controllers 00:07:34.805 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:34.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:34.805 Initialization complete. Launching workers. 
00:07:34.805 ======================================================== 00:07:34.805 Latency(us) 00:07:34.805 Device Information : IOPS MiB/s Average min max 00:07:34.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14794.95 57.79 4325.27 780.24 15338.93 00:07:34.805 ======================================================== 00:07:34.805 Total : 14794.95 57.79 4325.27 780.24 15338.93 00:07:34.805 00:07:34.805 23:47:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:34.805 23:47:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:34.805 23:47:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:34.805 23:47:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:34.805 23:47:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:34.805 23:47:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:34.805 23:47:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:34.805 23:47:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:34.805 rmmod nvme_tcp 00:07:34.805 rmmod nvme_fabrics 00:07:35.064 rmmod nvme_keyring 00:07:35.064 23:47:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:35.064 23:47:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:35.064 23:47:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:35.064 23:47:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 670967 ']' 00:07:35.064 23:47:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 670967 00:07:35.064 23:47:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 670967 ']' 00:07:35.064 23:47:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 670967 00:07:35.064 23:47:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:35.064 23:47:40 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:35.064 23:47:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 670967 00:07:35.064 23:47:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:35.064 23:47:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:35.064 23:47:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 670967' 00:07:35.064 killing process with pid 670967 00:07:35.064 23:47:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 670967 00:07:35.064 23:47:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 670967 00:07:35.064 nvmf threads initialize successfully 00:07:35.064 bdev subsystem init successfully 00:07:35.064 created a nvmf target service 00:07:35.064 create targets's poll groups done 00:07:35.064 all subsystems of target started 00:07:35.064 nvmf target is running 00:07:35.064 all subsystems of target stopped 00:07:35.064 destroy targets's poll groups done 00:07:35.064 destroyed the nvmf target service 00:07:35.064 bdev subsystem finish successfully 00:07:35.064 nvmf threads destroy successfully 00:07:35.322 23:47:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:35.322 23:47:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:35.322 23:47:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:35.322 23:47:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:35.322 23:47:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:35.322 23:47:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.322 23:47:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.322 23:47:40 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.227 23:47:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:37.227 23:47:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:37.227 23:47:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.227 23:47:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.227 00:07:37.227 real 0m15.716s 00:07:37.227 user 0m44.111s 00:07:37.227 sys 0m3.591s 00:07:37.227 23:47:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.227 23:47:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.227 ************************************ 00:07:37.227 END TEST nvmf_example 00:07:37.227 ************************************ 00:07:37.227 23:47:42 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:37.227 23:47:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:37.227 23:47:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.227 23:47:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:37.227 ************************************ 00:07:37.227 START TEST nvmf_filesystem 00:07:37.227 ************************************ 00:07:37.227 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:37.488 * Looking for test storage... 
00:07:37.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:37.488 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # 
CONFIG_SHARED=y 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:37.489 #define SPDK_CONFIG_H 00:07:37.489 
#define SPDK_CONFIG_APPS 1 00:07:37.489 #define SPDK_CONFIG_ARCH native 00:07:37.489 #undef SPDK_CONFIG_ASAN 00:07:37.489 #undef SPDK_CONFIG_AVAHI 00:07:37.489 #undef SPDK_CONFIG_CET 00:07:37.489 #define SPDK_CONFIG_COVERAGE 1 00:07:37.489 #define SPDK_CONFIG_CROSS_PREFIX 00:07:37.489 #undef SPDK_CONFIG_CRYPTO 00:07:37.489 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:37.489 #undef SPDK_CONFIG_CUSTOMOCF 00:07:37.489 #undef SPDK_CONFIG_DAOS 00:07:37.489 #define SPDK_CONFIG_DAOS_DIR 00:07:37.489 #define SPDK_CONFIG_DEBUG 1 00:07:37.489 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:37.489 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:37.489 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:37.489 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:37.489 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:37.489 #undef SPDK_CONFIG_DPDK_UADK 00:07:37.489 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:37.489 #define SPDK_CONFIG_EXAMPLES 1 00:07:37.489 #undef SPDK_CONFIG_FC 00:07:37.489 #define SPDK_CONFIG_FC_PATH 00:07:37.489 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:37.489 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:37.489 #undef SPDK_CONFIG_FUSE 00:07:37.489 #undef SPDK_CONFIG_FUZZER 00:07:37.489 #define SPDK_CONFIG_FUZZER_LIB 00:07:37.489 #undef SPDK_CONFIG_GOLANG 00:07:37.489 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:37.489 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:37.489 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:37.489 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:37.489 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:37.489 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:37.489 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:37.489 #define SPDK_CONFIG_IDXD 1 00:07:37.489 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:37.489 #undef SPDK_CONFIG_IPSEC_MB 00:07:37.489 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:37.489 
#define SPDK_CONFIG_ISAL 1 00:07:37.489 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:37.489 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:37.489 #define SPDK_CONFIG_LIBDIR 00:07:37.489 #undef SPDK_CONFIG_LTO 00:07:37.489 #define SPDK_CONFIG_MAX_LCORES 00:07:37.489 #define SPDK_CONFIG_NVME_CUSE 1 00:07:37.489 #undef SPDK_CONFIG_OCF 00:07:37.489 #define SPDK_CONFIG_OCF_PATH 00:07:37.489 #define SPDK_CONFIG_OPENSSL_PATH 00:07:37.489 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:37.489 #define SPDK_CONFIG_PGO_DIR 00:07:37.489 #undef SPDK_CONFIG_PGO_USE 00:07:37.489 #define SPDK_CONFIG_PREFIX /usr/local 00:07:37.489 #undef SPDK_CONFIG_RAID5F 00:07:37.489 #undef SPDK_CONFIG_RBD 00:07:37.489 #define SPDK_CONFIG_RDMA 1 00:07:37.489 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:37.489 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:37.489 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:37.489 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:37.489 #define SPDK_CONFIG_SHARED 1 00:07:37.489 #undef SPDK_CONFIG_SMA 00:07:37.489 #define SPDK_CONFIG_TESTS 1 00:07:37.489 #undef SPDK_CONFIG_TSAN 00:07:37.489 #define SPDK_CONFIG_UBLK 1 00:07:37.489 #define SPDK_CONFIG_UBSAN 1 00:07:37.489 #undef SPDK_CONFIG_UNIT_TESTS 00:07:37.489 #undef SPDK_CONFIG_URING 00:07:37.489 #define SPDK_CONFIG_URING_PATH 00:07:37.489 #undef SPDK_CONFIG_URING_ZNS 00:07:37.489 #undef SPDK_CONFIG_USDT 00:07:37.489 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:37.489 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:37.489 #define SPDK_CONFIG_VFIO_USER 1 00:07:37.489 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:37.489 #define SPDK_CONFIG_VHOST 1 00:07:37.489 #define SPDK_CONFIG_VIRTIO 1 00:07:37.489 #undef SPDK_CONFIG_VTUNE 00:07:37.489 #define SPDK_CONFIG_VTUNE_DIR 00:07:37.489 #define SPDK_CONFIG_WERROR 1 00:07:37.489 #define SPDK_CONFIG_WPDK_DIR 00:07:37.489 #undef SPDK_CONFIG_XNVME 00:07:37.489 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.489 23:47:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:37.490 23:47:42 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:37.490 23:47:42 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:37.490 23:47:42 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:37.490 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export 
SPDK_TEST_VMD 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v22.11.4 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export 
SPDK_TEST_NVMF_NICS 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:37.491 23:47:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:37.491 23:47:43 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:37.491 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:37.492 23:47:43 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 672676 ]] 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 672676 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.HoveWt 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 
00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.HoveWt/tests/target /tmp/spdk.HoveWt 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=952066048 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4332363776 00:07:37.492 23:47:43 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=49241010176 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994708992 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=12753698816 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30992642048 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997352448 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12390178816 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398944256 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
uses["$mount"]=8765440 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30996164608 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997356544 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1191936 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199463936 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199468032 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:37.492 * Looking for test storage... 
00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=49241010176 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=14968291328 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.492 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:37.493 23:47:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.392 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:39.393 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:39.393 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:39.393 Found net devices under 0000:09:00.0: cvl_0_0 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:39.393 Found net devices under 0000:09:00.1: cvl_0_1 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.393 23:47:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.393 23:47:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.393 23:47:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.393 23:47:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:39.393 23:47:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.393 23:47:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.393 23:47:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.393 23:47:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:39.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:07:39.393 00:07:39.393 --- 10.0.0.2 ping statistics --- 00:07:39.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.393 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:07:39.393 23:47:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:39.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:07:39.393 00:07:39.393 --- 10.0.0.1 ping statistics --- 00:07:39.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.393 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:07:39.393 23:47:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.393 23:47:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:39.393 23:47:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:39.393 23:47:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.393 23:47:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:39.393 23:47:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:39.393 23:47:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.393 23:47:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:39.393 23:47:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.652 ************************************ 00:07:39.652 START TEST nvmf_filesystem_no_in_capsule 00:07:39.652 ************************************ 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=674294 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 674294 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 674294 ']' 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:39.652 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.652 [2024-07-24 23:47:45.194776] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:39.652 [2024-07-24 23:47:45.194875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.652 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.652 [2024-07-24 23:47:45.262689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.652 [2024-07-24 23:47:45.360666] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.652 [2024-07-24 23:47:45.360716] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.652 [2024-07-24 23:47:45.360737] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.652 [2024-07-24 23:47:45.360749] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.652 [2024-07-24 23:47:45.360759] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:39.652 [2024-07-24 23:47:45.360844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.652 [2024-07-24 23:47:45.360907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.652 [2024-07-24 23:47:45.360998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.652 [2024-07-24 23:47:45.361000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.910 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:39.910 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:39.910 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:39.910 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:39.910 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.910 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.910 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:39.910 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:39.910 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.910 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.910 [2024-07-24 23:47:45.521024] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.910 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.910 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:39.910 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.910 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.168 Malloc1 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.168 23:47:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.168 [2024-07-24 23:47:45.708896] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:40.168 { 00:07:40.168 "name": "Malloc1", 00:07:40.168 "aliases": [ 00:07:40.168 "8f318862-9c54-484e-857d-b61297e1e471" 00:07:40.168 ], 00:07:40.168 "product_name": "Malloc disk", 
00:07:40.168 "block_size": 512, 00:07:40.168 "num_blocks": 1048576, 00:07:40.168 "uuid": "8f318862-9c54-484e-857d-b61297e1e471", 00:07:40.168 "assigned_rate_limits": { 00:07:40.168 "rw_ios_per_sec": 0, 00:07:40.168 "rw_mbytes_per_sec": 0, 00:07:40.168 "r_mbytes_per_sec": 0, 00:07:40.168 "w_mbytes_per_sec": 0 00:07:40.168 }, 00:07:40.168 "claimed": true, 00:07:40.168 "claim_type": "exclusive_write", 00:07:40.168 "zoned": false, 00:07:40.168 "supported_io_types": { 00:07:40.168 "read": true, 00:07:40.168 "write": true, 00:07:40.168 "unmap": true, 00:07:40.168 "write_zeroes": true, 00:07:40.168 "flush": true, 00:07:40.168 "reset": true, 00:07:40.168 "compare": false, 00:07:40.168 "compare_and_write": false, 00:07:40.168 "abort": true, 00:07:40.168 "nvme_admin": false, 00:07:40.168 "nvme_io": false 00:07:40.168 }, 00:07:40.168 "memory_domains": [ 00:07:40.168 { 00:07:40.168 "dma_device_id": "system", 00:07:40.168 "dma_device_type": 1 00:07:40.168 }, 00:07:40.168 { 00:07:40.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.168 "dma_device_type": 2 00:07:40.168 } 00:07:40.168 ], 00:07:40.168 "driver_specific": {} 00:07:40.168 } 00:07:40.168 ]' 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:40.168 23:47:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:40.168 23:47:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:40.732 23:47:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:40.732 23:47:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:40.732 23:47:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:40.732 23:47:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:40.732 23:47:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:43.255 23:47:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:43.255 23:47:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:43.255 23:47:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:43.255 23:47:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:43.255 23:47:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:43.255 23:47:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:43.255 23:47:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:43.255 23:47:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:43.255 23:47:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:43.255 23:47:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:43.255 23:47:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:43.255 23:47:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:43.255 23:47:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:43.255 23:47:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:43.255 23:47:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:43.255 23:47:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:43.255 23:47:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:43.255 23:47:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:43.512 23:47:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:44.444 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:44.444 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 
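Before partitioning, the script cross-checks that the freshly connected nvme0n1 matches the malloc bdev: block_size 512 times num_blocks 1048576 (both read via bdev_get_bdevs) must equal the 536870912 bytes that sec_size_to_bytes reports for the namespace. A minimal restatement of that arithmetic; the `bdev_size_bytes` name is made up for illustration.

```shell
# Illustrative restatement of the size check traced above: multiply
# block_size by num_blocks (as reported by the bdev_get_bdevs RPC) and
# compare the product with the byte size of the connected NVMe namespace.
bdev_size_bytes() {
  local bs=$1 nb=$2
  echo $(( bs * nb ))
}

malloc_size=$(bdev_size_bytes 512 1048576)   # values reported for Malloc1
nvme_size=536870912                          # from sec_size_to_bytes nvme0n1
if (( malloc_size == nvme_size )); then
  echo "sizes match"
fi
```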
00:07:44.444 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:44.444 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.444 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.702 ************************************ 00:07:44.702 START TEST filesystem_ext4 00:07:44.702 ************************************ 00:07:44.702 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:44.702 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:44.702 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:44.702 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:44.702 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:44.702 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:44.702 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:44.702 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:44.702 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:44.702 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
common/autotest_common.sh@928 -- # force=-F 00:07:44.702 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:44.702 mke2fs 1.46.5 (30-Dec-2021) 00:07:44.702 Discarding device blocks: 0/522240 done 00:07:44.702 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:44.702 Filesystem UUID: c766a34f-425f-4838-adb9-04391d4ab23d 00:07:44.702 Superblock backups stored on blocks: 00:07:44.702 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:44.702 00:07:44.702 Allocating group tables: 0/64 done 00:07:44.702 Writing inode tables: 0/64 done 00:07:44.702 Creating journal (8192 blocks): done 00:07:44.702 Writing superblocks and filesystem accounting information: 0/64 done 00:07:44.702 00:07:44.702 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:44.702 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:44.959 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:44.959 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:44.959 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:44.959 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:44.959 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:44.959 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:44.959 23:47:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 674294 00:07:44.959 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:44.959 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:44.959 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:44.959 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:44.959 00:07:44.959 real 0m0.496s 00:07:44.959 user 0m0.018s 00:07:44.959 sys 0m0.051s 00:07:44.959 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.959 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:44.959 ************************************ 00:07:44.959 END TEST filesystem_ext4 00:07:44.959 ************************************ 00:07:45.218 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:45.218 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:45.218 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.218 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.218 ************************************ 00:07:45.218 START TEST filesystem_btrfs 00:07:45.218 ************************************ 00:07:45.218 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs 
-- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:45.218 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:45.218 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:45.218 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:45.218 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:45.218 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:45.218 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:45.218 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:45.218 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:45.218 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:45.218 23:47:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:45.475 btrfs-progs v6.6.2 00:07:45.475 See https://btrfs.readthedocs.io for more information. 00:07:45.475 00:07:45.475 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:45.475 NOTE: several default settings have changed in version 5.15, please make sure 00:07:45.475 this does not affect your deployments: 00:07:45.475 - DUP for metadata (-m dup) 00:07:45.475 - enabled no-holes (-O no-holes) 00:07:45.476 - enabled free-space-tree (-R free-space-tree) 00:07:45.476 00:07:45.476 Label: (null) 00:07:45.476 UUID: 42e8051c-ff80-4feb-bb65-c942a7d387ce 00:07:45.476 Node size: 16384 00:07:45.476 Sector size: 4096 00:07:45.476 Filesystem size: 510.00MiB 00:07:45.476 Block group profiles: 00:07:45.476 Data: single 8.00MiB 00:07:45.476 Metadata: DUP 32.00MiB 00:07:45.476 System: DUP 8.00MiB 00:07:45.476 SSD detected: yes 00:07:45.476 Zoned device: no 00:07:45.476 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:45.476 Runtime features: free-space-tree 00:07:45.476 Checksum: crc32c 00:07:45.476 Number of devices: 1 00:07:45.476 Devices: 00:07:45.476 ID SIZE PATH 00:07:45.476 1 510.00MiB /dev/nvme0n1p1 00:07:45.476 00:07:45.476 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:45.476 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:46.039 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:46.039 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:46.039 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:46.039 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:46.039 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:46.039 23:47:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 674294 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:46.297 00:07:46.297 real 0m1.052s 00:07:46.297 user 0m0.008s 00:07:46.297 sys 0m0.128s 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:46.297 ************************************ 00:07:46.297 END TEST filesystem_btrfs 00:07:46.297 ************************************ 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:46.297 ************************************ 00:07:46.297 START TEST 
filesystem_xfs 00:07:46.297 ************************************ 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:46.297 23:47:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:46.297 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:46.297 = sectsz=512 attr=2, projid32bit=1 00:07:46.297 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:46.297 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:46.297 data = bsize=4096 blocks=130560, imaxpct=25 
00:07:46.297 = sunit=0 swidth=0 blks 00:07:46.297 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:46.297 log =internal log bsize=4096 blocks=16384, version=2 00:07:46.297 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:46.297 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:47.228 Discarding blocks...Done. 00:07:47.228 23:47:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:47.228 23:47:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:49.751 23:47:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:49.751 23:47:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:49.751 23:47:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:49.751 23:47:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:49.751 23:47:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:49.751 23:47:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:49.751 23:47:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 674294 00:07:49.751 23:47:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:49.751 23:47:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:49.751 23:47:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:07:49.751 23:47:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:49.751 00:07:49.751 real 0m3.172s 00:07:49.751 user 0m0.022s 00:07:49.751 sys 0m0.058s 00:07:49.751 23:47:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.751 23:47:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:49.751 ************************************ 00:07:49.751 END TEST filesystem_xfs 00:07:49.751 ************************************ 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:49.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 674294 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 674294 ']' 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 674294 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 674294 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 674294' 00:07:49.751 killing process with pid 674294 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 674294 00:07:49.751 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 674294 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:50.316 00:07:50.316 real 0m10.726s 00:07:50.316 user 0m41.058s 00:07:50.316 sys 0m1.632s 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.316 ************************************ 00:07:50.316 END TEST nvmf_filesystem_no_in_capsule 00:07:50.316 ************************************ 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.316 ************************************ 00:07:50.316 START TEST nvmf_filesystem_in_capsule 00:07:50.316 ************************************ 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:50.316 23:47:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=675775 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 675775 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 675775 ']' 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:50.316 23:47:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.316 [2024-07-24 23:47:55.965686] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:50.316 [2024-07-24 23:47:55.965760] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.316 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.574 [2024-07-24 23:47:56.035397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.574 [2024-07-24 23:47:56.123868] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.574 [2024-07-24 23:47:56.123939] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.574 [2024-07-24 23:47:56.123952] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.574 [2024-07-24 23:47:56.123963] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.574 [2024-07-24 23:47:56.123972] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:50.574 [2024-07-24 23:47:56.124048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.574 [2024-07-24 23:47:56.124128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.574 [2024-07-24 23:47:56.124131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.574 [2024-07-24 23:47:56.124072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.574 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:50.574 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:50.574 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:50.574 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.574 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.574 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.574 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:50.574 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:50.574 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.574 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.574 [2024-07-24 23:47:56.277944] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.574 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:50.574 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:50.574 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.574 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.831 Malloc1 00:07:50.831 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.831 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:50.831 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.831 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.831 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.832 23:47:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.832 [2024-07-24 23:47:56.467595] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:50.832 { 00:07:50.832 "name": "Malloc1", 00:07:50.832 "aliases": [ 00:07:50.832 "1a8736fa-b313-4e42-8650-b16facac970d" 00:07:50.832 ], 00:07:50.832 "product_name": "Malloc disk", 00:07:50.832 "block_size": 512, 00:07:50.832 "num_blocks": 1048576, 00:07:50.832 "uuid": "1a8736fa-b313-4e42-8650-b16facac970d", 00:07:50.832 "assigned_rate_limits": { 
00:07:50.832 "rw_ios_per_sec": 0, 00:07:50.832 "rw_mbytes_per_sec": 0, 00:07:50.832 "r_mbytes_per_sec": 0, 00:07:50.832 "w_mbytes_per_sec": 0 00:07:50.832 }, 00:07:50.832 "claimed": true, 00:07:50.832 "claim_type": "exclusive_write", 00:07:50.832 "zoned": false, 00:07:50.832 "supported_io_types": { 00:07:50.832 "read": true, 00:07:50.832 "write": true, 00:07:50.832 "unmap": true, 00:07:50.832 "write_zeroes": true, 00:07:50.832 "flush": true, 00:07:50.832 "reset": true, 00:07:50.832 "compare": false, 00:07:50.832 "compare_and_write": false, 00:07:50.832 "abort": true, 00:07:50.832 "nvme_admin": false, 00:07:50.832 "nvme_io": false 00:07:50.832 }, 00:07:50.832 "memory_domains": [ 00:07:50.832 { 00:07:50.832 "dma_device_id": "system", 00:07:50.832 "dma_device_type": 1 00:07:50.832 }, 00:07:50.832 { 00:07:50.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.832 "dma_device_type": 2 00:07:50.832 } 00:07:50.832 ], 00:07:50.832 "driver_specific": {} 00:07:50.832 } 00:07:50.832 ]' 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:50.832 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:51.089 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:51.089 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:51.089 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:51.089 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:51.089 23:47:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme 
connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:51.653 23:47:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:51.653 23:47:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:51.653 23:47:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:51.653 23:47:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:51.653 23:47:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:53.548 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:53.548 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:53.548 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:53.805 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:53.805 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:53.805 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:53.805 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:53.805 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:53.805 23:47:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:53.805 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:53.805 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:53.805 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:53.805 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:53.805 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:53.805 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:53.805 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:53.805 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:53.805 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:54.370 23:47:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:55.319 23:48:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:55.319 23:48:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:55.319 23:48:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:55.319 23:48:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.319 23:48:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.319 ************************************ 00:07:55.319 START TEST filesystem_in_capsule_ext4 00:07:55.319 ************************************ 00:07:55.319 23:48:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:55.319 23:48:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:55.319 23:48:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:55.319 23:48:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:55.319 23:48:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:55.319 23:48:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:55.319 23:48:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:55.319 23:48:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:55.319 23:48:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:55.319 23:48:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:55.319 23:48:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 
00:07:55.319 mke2fs 1.46.5 (30-Dec-2021) 00:07:55.594 Discarding device blocks: 0/522240 done 00:07:55.594 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:55.594 Filesystem UUID: 2fd16a38-a306-4374-b19f-299f0abe2c64 00:07:55.594 Superblock backups stored on blocks: 00:07:55.594 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:55.594 00:07:55.594 Allocating group tables: 0/64 done 00:07:55.594 Writing inode tables: 0/64 done 00:07:58.116 Creating journal (8192 blocks): done 00:07:58.116 Writing superblocks and filesystem accounting information: 0/64 done 00:07:58.116 00:07:58.116 23:48:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:58.116 23:48:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 675775 00:07:58.681 23:48:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:58.681 00:07:58.681 real 0m3.262s 00:07:58.681 user 0m0.016s 00:07:58.681 sys 0m0.057s 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:58.681 ************************************ 00:07:58.681 END TEST filesystem_in_capsule_ext4 00:07:58.681 ************************************ 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.681 ************************************ 00:07:58.681 START TEST filesystem_in_capsule_btrfs 00:07:58.681 ************************************ 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create 
btrfs nvme0n1 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:58.681 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:58.939 btrfs-progs v6.6.2 00:07:58.939 See https://btrfs.readthedocs.io for more information. 00:07:58.939 00:07:58.939 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:58.939 NOTE: several default settings have changed in version 5.15, please make sure 00:07:58.939 this does not affect your deployments: 00:07:58.939 - DUP for metadata (-m dup) 00:07:58.939 - enabled no-holes (-O no-holes) 00:07:58.939 - enabled free-space-tree (-R free-space-tree) 00:07:58.939 00:07:58.939 Label: (null) 00:07:58.939 UUID: ac2ae0e7-41cc-4c62-ba0b-15fb0ea3ed23 00:07:58.939 Node size: 16384 00:07:58.939 Sector size: 4096 00:07:58.939 Filesystem size: 510.00MiB 00:07:58.939 Block group profiles: 00:07:58.939 Data: single 8.00MiB 00:07:58.939 Metadata: DUP 32.00MiB 00:07:58.939 System: DUP 8.00MiB 00:07:58.939 SSD detected: yes 00:07:58.939 Zoned device: no 00:07:58.939 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:58.939 Runtime features: free-space-tree 00:07:58.939 Checksum: crc32c 00:07:58.939 Number of devices: 1 00:07:58.939 Devices: 00:07:58.939 ID SIZE PATH 00:07:58.939 1 510.00MiB /dev/nvme0n1p1 00:07:58.939 00:07:58.939 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:58.939 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.505 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.505 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:59.505 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.505 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:59.505 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@29 -- # i=0 00:07:59.505 23:48:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 675775 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.505 00:07:59.505 real 0m0.790s 00:07:59.505 user 0m0.024s 00:07:59.505 sys 0m0.112s 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:59.505 ************************************ 00:07:59.505 END TEST filesystem_in_capsule_btrfs 00:07:59.505 ************************************ 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:59.505 ************************************ 00:07:59.505 START TEST filesystem_in_capsule_xfs 00:07:59.505 ************************************ 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:59.505 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:59.505 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 
00:07:59.505 = sectsz=512 attr=2, projid32bit=1 00:07:59.506 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:59.506 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:59.506 data = bsize=4096 blocks=130560, imaxpct=25 00:07:59.506 = sunit=0 swidth=0 blks 00:07:59.506 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:59.506 log =internal log bsize=4096 blocks=16384, version=2 00:07:59.506 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:59.506 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:00.439 Discarding blocks...Done. 00:08:00.439 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:00.439 23:48:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 675775 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o 
NAME 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:02.335 00:08:02.335 real 0m2.708s 00:08:02.335 user 0m0.014s 00:08:02.335 sys 0m0.060s 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:02.335 ************************************ 00:08:02.335 END TEST filesystem_in_capsule_xfs 00:08:02.335 ************************************ 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:02.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.335 23:48:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.335 23:48:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.335 23:48:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:02.335 23:48:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 675775 00:08:02.335 23:48:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 675775 ']' 00:08:02.335 23:48:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 675775 00:08:02.335 23:48:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:02.335 23:48:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:02.335 23:48:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 675775 00:08:02.335 23:48:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:02.335 23:48:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:02.335 23:48:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 675775' 00:08:02.335 killing process with pid 675775 00:08:02.335 23:48:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 675775 00:08:02.335 23:48:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 675775 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:02.900 00:08:02.900 real 0m12.551s 00:08:02.900 user 0m48.219s 00:08:02.900 sys 0m1.814s 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.900 ************************************ 00:08:02.900 END TEST nvmf_filesystem_in_capsule 00:08:02.900 ************************************ 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:02.900 rmmod nvme_tcp 00:08:02.900 rmmod nvme_fabrics 00:08:02.900 rmmod nvme_keyring 00:08:02.900 23:48:08 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.900 23:48:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.434 23:48:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:05.434 00:08:05.434 real 0m27.690s 00:08:05.434 user 1m30.129s 00:08:05.434 sys 0m5.004s 00:08:05.434 23:48:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:05.434 23:48:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.434 ************************************ 00:08:05.434 END TEST nvmf_filesystem 00:08:05.434 ************************************ 00:08:05.434 23:48:10 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:05.434 23:48:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:05.434 23:48:10 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:08:05.434 23:48:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:05.434 ************************************ 00:08:05.434 START TEST nvmf_target_discovery 00:08:05.434 ************************************ 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:05.434 * Looking for test storage... 00:08:05.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:05.434 23:48:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:07.337 23:48:12 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:07.337 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:07.337 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.337 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:07.338 Found net devices under 0000:09:00.0: cvl_0_0 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:07.338 Found net devices under 0000:09:00.1: cvl_0_1 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.338 23:48:12 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:07.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:08:07.338 00:08:07.338 --- 10.0.0.2 ping statistics --- 00:08:07.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.338 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:07.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:08:07.338 00:08:07.338 --- 10.0.0.1 ping statistics --- 00:08:07.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.338 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=679956 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 679956 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 
0 -e 0xFFFF -m 0xF 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 679956 ']' 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:07.338 23:48:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.338 [2024-07-24 23:48:12.869299] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:07.338 [2024-07-24 23:48:12.869392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.338 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.338 [2024-07-24 23:48:12.935142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.338 [2024-07-24 23:48:13.027648] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.338 [2024-07-24 23:48:13.027693] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.338 [2024-07-24 23:48:13.027714] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.338 [2024-07-24 23:48:13.027724] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:07.338 [2024-07-24 23:48:13.027734] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.338 [2024-07-24 23:48:13.027815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.338 [2024-07-24 23:48:13.027880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.338 [2024-07-24 23:48:13.027949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.338 [2024-07-24 23:48:13.027951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.597 [2024-07-24 23:48:13.184042] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.597 23:48:13 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.597 Null1 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.597 [2024-07-24 23:48:13.224383] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.597 23:48:13 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.597 Null2 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.597 Null3 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.597 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i 
in $(seq 1 4) 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.598 Null4 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.598 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener 
discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:08:07.858 00:08:07.858 Discovery Log Number of Records 6, Generation counter 6 00:08:07.858 =====Discovery Log Entry 0====== 00:08:07.858 trtype: tcp 00:08:07.858 adrfam: ipv4 00:08:07.858 subtype: current discovery subsystem 00:08:07.858 treq: not required 00:08:07.858 portid: 0 00:08:07.858 trsvcid: 4420 00:08:07.858 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:07.858 traddr: 10.0.0.2 00:08:07.858 eflags: explicit discovery connections, duplicate discovery information 00:08:07.858 sectype: none 00:08:07.858 =====Discovery Log Entry 1====== 00:08:07.858 trtype: tcp 00:08:07.858 adrfam: ipv4 00:08:07.858 subtype: nvme subsystem 00:08:07.858 treq: not required 00:08:07.858 portid: 0 00:08:07.858 trsvcid: 4420 00:08:07.858 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:07.858 traddr: 10.0.0.2 00:08:07.858 eflags: none 00:08:07.858 sectype: none 00:08:07.858 =====Discovery Log Entry 2====== 00:08:07.858 trtype: tcp 00:08:07.858 adrfam: 
ipv4 00:08:07.858 subtype: nvme subsystem 00:08:07.858 treq: not required 00:08:07.858 portid: 0 00:08:07.858 trsvcid: 4420 00:08:07.858 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:07.858 traddr: 10.0.0.2 00:08:07.858 eflags: none 00:08:07.858 sectype: none 00:08:07.858 =====Discovery Log Entry 3====== 00:08:07.858 trtype: tcp 00:08:07.858 adrfam: ipv4 00:08:07.858 subtype: nvme subsystem 00:08:07.858 treq: not required 00:08:07.858 portid: 0 00:08:07.858 trsvcid: 4420 00:08:07.858 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:07.858 traddr: 10.0.0.2 00:08:07.858 eflags: none 00:08:07.858 sectype: none 00:08:07.858 =====Discovery Log Entry 4====== 00:08:07.858 trtype: tcp 00:08:07.858 adrfam: ipv4 00:08:07.858 subtype: nvme subsystem 00:08:07.858 treq: not required 00:08:07.858 portid: 0 00:08:07.858 trsvcid: 4420 00:08:07.858 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:07.858 traddr: 10.0.0.2 00:08:07.858 eflags: none 00:08:07.858 sectype: none 00:08:07.858 =====Discovery Log Entry 5====== 00:08:07.858 trtype: tcp 00:08:07.858 adrfam: ipv4 00:08:07.858 subtype: discovery subsystem referral 00:08:07.858 treq: not required 00:08:07.858 portid: 0 00:08:07.858 trsvcid: 4430 00:08:07.858 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:07.858 traddr: 10.0.0.2 00:08:07.858 eflags: none 00:08:07.858 sectype: none 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:07.858 Perform nvmf subsystem discovery via RPC 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.858 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.858 [ 00:08:07.858 { 00:08:07.858 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:07.858 "subtype": "Discovery", 00:08:07.858 "listen_addresses": [ 00:08:07.858 { 
00:08:07.858 "trtype": "TCP", 00:08:07.858 "adrfam": "IPv4", 00:08:07.858 "traddr": "10.0.0.2", 00:08:07.858 "trsvcid": "4420" 00:08:07.858 } 00:08:07.858 ], 00:08:07.858 "allow_any_host": true, 00:08:07.858 "hosts": [] 00:08:07.858 }, 00:08:07.858 { 00:08:07.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.858 "subtype": "NVMe", 00:08:07.858 "listen_addresses": [ 00:08:07.858 { 00:08:07.858 "trtype": "TCP", 00:08:07.858 "adrfam": "IPv4", 00:08:07.858 "traddr": "10.0.0.2", 00:08:07.858 "trsvcid": "4420" 00:08:07.858 } 00:08:07.858 ], 00:08:07.858 "allow_any_host": true, 00:08:07.858 "hosts": [], 00:08:07.858 "serial_number": "SPDK00000000000001", 00:08:07.858 "model_number": "SPDK bdev Controller", 00:08:07.858 "max_namespaces": 32, 00:08:07.858 "min_cntlid": 1, 00:08:07.858 "max_cntlid": 65519, 00:08:07.858 "namespaces": [ 00:08:07.858 { 00:08:07.858 "nsid": 1, 00:08:07.858 "bdev_name": "Null1", 00:08:07.858 "name": "Null1", 00:08:07.858 "nguid": "96DE6DE7C92749C2845628509D5D0950", 00:08:07.858 "uuid": "96de6de7-c927-49c2-8456-28509d5d0950" 00:08:07.858 } 00:08:07.858 ] 00:08:07.858 }, 00:08:07.858 { 00:08:07.858 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:07.858 "subtype": "NVMe", 00:08:07.858 "listen_addresses": [ 00:08:07.858 { 00:08:07.858 "trtype": "TCP", 00:08:07.858 "adrfam": "IPv4", 00:08:07.858 "traddr": "10.0.0.2", 00:08:07.858 "trsvcid": "4420" 00:08:07.858 } 00:08:07.858 ], 00:08:07.858 "allow_any_host": true, 00:08:07.858 "hosts": [], 00:08:07.858 "serial_number": "SPDK00000000000002", 00:08:07.858 "model_number": "SPDK bdev Controller", 00:08:07.858 "max_namespaces": 32, 00:08:07.858 "min_cntlid": 1, 00:08:07.858 "max_cntlid": 65519, 00:08:07.858 "namespaces": [ 00:08:07.858 { 00:08:07.858 "nsid": 1, 00:08:07.858 "bdev_name": "Null2", 00:08:07.859 "name": "Null2", 00:08:07.859 "nguid": "78FD29339CB84E14B6D188990001DC9E", 00:08:07.859 "uuid": "78fd2933-9cb8-4e14-b6d1-88990001dc9e" 00:08:07.859 } 00:08:07.859 ] 00:08:07.859 }, 00:08:07.859 { 
00:08:07.859 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:07.859 "subtype": "NVMe", 00:08:07.859 "listen_addresses": [ 00:08:07.859 { 00:08:07.859 "trtype": "TCP", 00:08:07.859 "adrfam": "IPv4", 00:08:07.859 "traddr": "10.0.0.2", 00:08:07.859 "trsvcid": "4420" 00:08:07.859 } 00:08:07.859 ], 00:08:07.859 "allow_any_host": true, 00:08:07.859 "hosts": [], 00:08:07.859 "serial_number": "SPDK00000000000003", 00:08:07.859 "model_number": "SPDK bdev Controller", 00:08:07.859 "max_namespaces": 32, 00:08:07.859 "min_cntlid": 1, 00:08:07.859 "max_cntlid": 65519, 00:08:07.859 "namespaces": [ 00:08:07.859 { 00:08:07.859 "nsid": 1, 00:08:07.859 "bdev_name": "Null3", 00:08:07.859 "name": "Null3", 00:08:07.859 "nguid": "9A4F11CD61B04854959C77D302A33839", 00:08:07.859 "uuid": "9a4f11cd-61b0-4854-959c-77d302a33839" 00:08:07.859 } 00:08:07.859 ] 00:08:07.859 }, 00:08:07.859 { 00:08:07.859 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:07.859 "subtype": "NVMe", 00:08:07.859 "listen_addresses": [ 00:08:07.859 { 00:08:07.859 "trtype": "TCP", 00:08:07.859 "adrfam": "IPv4", 00:08:07.859 "traddr": "10.0.0.2", 00:08:07.859 "trsvcid": "4420" 00:08:07.859 } 00:08:07.859 ], 00:08:07.859 "allow_any_host": true, 00:08:07.859 "hosts": [], 00:08:07.859 "serial_number": "SPDK00000000000004", 00:08:07.859 "model_number": "SPDK bdev Controller", 00:08:07.859 "max_namespaces": 32, 00:08:07.859 "min_cntlid": 1, 00:08:07.859 "max_cntlid": 65519, 00:08:07.859 "namespaces": [ 00:08:07.859 { 00:08:07.859 "nsid": 1, 00:08:07.859 "bdev_name": "Null4", 00:08:07.859 "name": "Null4", 00:08:07.859 "nguid": "3381B8A899024D9A9F41D697D99B5F9A", 00:08:07.859 "uuid": "3381b8a8-9902-4d9a-9f41-d697d99b5f9a" 00:08:07.859 } 00:08:07.859 ] 00:08:07.859 } 00:08:07.859 ] 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 
00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.859 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.116 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.116 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 
00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.117 rmmod nvme_tcp 00:08:08.117 rmmod nvme_fabrics 00:08:08.117 rmmod nvme_keyring 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 679956 ']' 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 679956 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 679956 ']' 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 679956 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 679956 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 679956' 00:08:08.117 killing process with pid 679956 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 679956 00:08:08.117 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 679956 00:08:08.374 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.374 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:08.374 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.374 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.374 23:48:13 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.374 23:48:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.374 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.374 23:48:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.903 23:48:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:10.903 00:08:10.903 real 0m5.364s 00:08:10.903 user 0m4.448s 00:08:10.903 sys 0m1.794s 00:08:10.903 23:48:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:10.903 23:48:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.903 ************************************ 00:08:10.903 END TEST nvmf_target_discovery 00:08:10.903 ************************************ 00:08:10.903 23:48:16 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:10.903 23:48:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:10.903 23:48:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:10.903 23:48:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:10.903 ************************************ 00:08:10.903 START TEST nvmf_referrals 00:08:10.903 ************************************ 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:10.903 * Looking for test storage... 
00:08:10.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.903 
23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:10.903 23:48:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:12.805 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:12.805 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:12.805 Found net devices under 0000:09:00.0: cvl_0_0 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.805 23:48:18 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:12.805 Found net devices under 0000:09:00.1: cvl_0_1 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:12.805 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:12.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:12.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:08:12.806 00:08:12.806 --- 10.0.0.2 ping statistics --- 00:08:12.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.806 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:12.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:08:12.806 00:08:12.806 --- 10.0.0.1 ping statistics --- 00:08:12.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.806 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.806 23:48:18 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=682039 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 682039 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 682039 ']' 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:12.806 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.806 [2024-07-24 23:48:18.367399] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:12.806 [2024-07-24 23:48:18.367489] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.806 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.806 [2024-07-24 23:48:18.435816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.064 [2024-07-24 23:48:18.523557] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.064 [2024-07-24 23:48:18.523602] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:13.064 [2024-07-24 23:48:18.523631] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.064 [2024-07-24 23:48:18.523643] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.064 [2024-07-24 23:48:18.523653] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.064 [2024-07-24 23:48:18.523705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.064 [2024-07-24 23:48:18.523729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.064 [2024-07-24 23:48:18.523752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.064 [2024-07-24 23:48:18.523755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.064 [2024-07-24 23:48:18.669889] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.064 23:48:18 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.064 [2024-07-24 23:48:18.682129] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:13.064 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.322 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:13.322 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:13.322 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:13.322 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.322 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.322 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.322 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.322 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:13.322 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:13.322 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:13.322 23:48:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:13.322 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.322 23:48:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.322 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.322 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:13.322 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.322 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.322 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.322 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:13.322 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.322 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.322 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.322 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.322 23:48:19 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.322 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:13.322 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.322 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:08:13.580 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:13.838 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:13.838 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:13.838 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:13.838 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:13.838 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:13.838 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.838 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:13.838 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:13.838 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:13.838 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:13.838 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:13.838 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.838 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.095 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:14.353 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:14.353 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:14.353 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:14.353 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:14.353 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.353 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:14.353 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:14.353 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:14.353 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.353 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.353 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.353 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:14.353 23:48:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:14.353 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.353 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.353 23:48:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.353 23:48:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:14.353 23:48:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:14.353 23:48:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:14.353 23:48:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:14.353 23:48:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.353 23:48:20 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:14.353 23:48:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.610 rmmod nvme_tcp 00:08:14.610 rmmod nvme_fabrics 00:08:14.610 rmmod nvme_keyring 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 682039 ']' 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 682039 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 682039 ']' 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 682039 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 682039 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 682039' 00:08:14.610 killing process with pid 682039 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 682039 00:08:14.610 23:48:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 682039 00:08:14.868 23:48:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:14.868 23:48:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:14.868 23:48:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:14.868 23:48:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.868 23:48:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.868 23:48:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.868 23:48:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.868 23:48:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.401 23:48:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:17.401 00:08:17.401 real 0m6.452s 00:08:17.401 user 0m9.105s 00:08:17.401 sys 0m2.122s 00:08:17.401 23:48:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:17.401 23:48:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.401 ************************************ 
00:08:17.401 END TEST nvmf_referrals 00:08:17.401 ************************************ 00:08:17.401 23:48:22 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:17.401 23:48:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:17.401 23:48:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:17.401 23:48:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:17.401 ************************************ 00:08:17.401 START TEST nvmf_connect_disconnect 00:08:17.401 ************************************ 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:17.401 * Looking for test storage... 00:08:17.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.401 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:17.402 23:48:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:19.303 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.303 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:19.304 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.304 23:48:24 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:19.304 Found net devices under 0000:09:00.0: cvl_0_0 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:19.304 Found net devices under 0000:09:00.1: cvl_0_1 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:19.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:19.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:08:19.304 00:08:19.304 --- 10.0.0.2 ping statistics --- 00:08:19.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.304 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:19.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:08:19.304 00:08:19.304 --- 10.0.0.1 ping statistics --- 00:08:19.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.304 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.304 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.305 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:19.305 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.305 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:08:19.305 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.305 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=684339 00:08:19.305 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.305 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 684339 00:08:19.305 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 684339 ']' 00:08:19.305 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.305 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:19.305 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.305 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:19.305 23:48:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.305 [2024-07-24 23:48:24.883869] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:19.305 [2024-07-24 23:48:24.883957] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.305 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.305 [2024-07-24 23:48:24.952128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.605 [2024-07-24 23:48:25.046076] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.605 [2024-07-24 23:48:25.046138] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.605 [2024-07-24 23:48:25.046155] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.605 [2024-07-24 23:48:25.046168] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.605 [2024-07-24 23:48:25.046179] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:19.605 [2024-07-24 23:48:25.046235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.605 [2024-07-24 23:48:25.046288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.605 [2024-07-24 23:48:25.046402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.605 [2024-07-24 23:48:25.046405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.605 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.606 [2024-07-24 23:48:25.209977] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.606 [2024-07-24 23:48:25.263473] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:19.606 23:48:25 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:19.606 23:48:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:22.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:09:15.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.820 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.771 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.421 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:10.421 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:10.421 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 
00:12:10.421 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:10.421 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:10.421 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:10.421 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:10.421 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:10.421 rmmod nvme_tcp 00:12:10.681 rmmod nvme_fabrics 00:12:10.681 rmmod nvme_keyring 00:12:10.681 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:10.681 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:10.681 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:10.681 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 684339 ']' 00:12:10.681 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 684339 00:12:10.681 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 684339 ']' 00:12:10.681 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 684339 00:12:10.681 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:10.681 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:10.681 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 684339 00:12:10.681 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:10.681 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:10.681 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 684339' 
00:12:10.681 killing process with pid 684339 00:12:10.681 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 684339 00:12:10.681 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 684339 00:12:10.941 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:10.941 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:10.942 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:10.942 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:10.942 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:10.942 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.942 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.942 23:52:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.850 23:52:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:12.850 00:12:12.850 real 3m55.958s 00:12:12.850 user 14m58.316s 00:12:12.850 sys 0m34.762s 00:12:12.850 23:52:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:12.850 23:52:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:12.850 ************************************ 00:12:12.850 END TEST nvmf_connect_disconnect 00:12:12.850 ************************************ 00:12:12.850 23:52:18 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:12.850 23:52:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:12.850 23:52:18 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:12:12.850 23:52:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:13.108 ************************************ 00:12:13.108 START TEST nvmf_multitarget 00:12:13.108 ************************************ 00:12:13.108 23:52:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:13.108 * Looking for test storage... 00:12:13.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.108 23:52:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.108 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:13.108 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.108 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.108 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.108 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.108 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.108 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.108 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.108 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.108 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.108 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.108 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:13.108 23:52:18 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:13.108 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.108 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:13.109 23:52:18 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:13.109 23:52:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.012 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:15.013 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:15.013 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.013 23:52:20 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:15.013 Found net devices under 0000:09:00.0: cvl_0_0 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:15.013 Found net devices under 0000:09:00.1: cvl_0_1 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:15.013 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:12:15.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:12:15.013 00:12:15.013 --- 10.0.0.2 ping statistics --- 00:12:15.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.013 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:12:15.013 00:12:15.013 --- 10.0.0.1 ping statistics --- 00:12:15.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.013 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=715401 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 715401 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 715401 ']' 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:15.013 23:52:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.273 [2024-07-24 23:52:20.754855] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:12:15.273 [2024-07-24 23:52:20.754957] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.273 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.273 [2024-07-24 23:52:20.824160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.273 [2024-07-24 23:52:20.918568] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:15.273 [2024-07-24 23:52:20.918631] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.273 [2024-07-24 23:52:20.918657] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.273 [2024-07-24 23:52:20.918670] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.273 [2024-07-24 23:52:20.918681] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.273 [2024-07-24 23:52:20.918762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.273 [2024-07-24 23:52:20.918815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.273 [2024-07-24 23:52:20.918864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.273 [2024-07-24 23:52:20.918867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.532 23:52:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:15.532 23:52:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:15.532 23:52:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.532 23:52:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.532 23:52:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.532 23:52:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.532 23:52:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:15.532 23:52:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:15.532 23:52:21 
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:15.532 23:52:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:15.532 23:52:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:15.790 "nvmf_tgt_1" 00:12:15.790 23:52:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:15.790 "nvmf_tgt_2" 00:12:15.790 23:52:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:15.790 23:52:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:16.048 23:52:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:16.048 23:52:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:16.048 true 00:12:16.048 23:52:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:16.048 true 00:12:16.048 23:52:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:16.048 23:52:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- 
# nvmftestfini 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:16.308 rmmod nvme_tcp 00:12:16.308 rmmod nvme_fabrics 00:12:16.308 rmmod nvme_keyring 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 715401 ']' 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 715401 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 715401 ']' 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 715401 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 715401 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 715401' 00:12:16.308 killing process with 
pid 715401 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 715401 00:12:16.308 23:52:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 715401 00:12:16.566 23:52:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:16.566 23:52:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:16.566 23:52:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:16.566 23:52:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.566 23:52:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.566 23:52:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.566 23:52:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.566 23:52:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.099 23:52:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:19.099 00:12:19.099 real 0m5.638s 00:12:19.099 user 0m6.250s 00:12:19.099 sys 0m1.899s 00:12:19.099 23:52:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:19.099 23:52:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:19.099 ************************************ 00:12:19.099 END TEST nvmf_multitarget 00:12:19.099 ************************************ 00:12:19.099 23:52:24 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:19.099 23:52:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:19.099 23:52:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:19.099 23:52:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:19.099 
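The teardown sequence above (tolerating `modprobe -r` failures with `set +e`, then probing the target with `kill -0` before killing it) can be sketched as a small standalone script. This is an illustration of the pattern only, not SPDK's actual `nvmfcleanup`/`killprocess` helpers; a background `sleep` stands in for `nvmf_tgt`.

```shell
#!/usr/bin/env bash
# Sketch of the cleanup pattern recorded in the log: module removal is
# allowed to fail (set +e around modprobe -r), and the target process is
# killed only if kill -0 shows it is still alive.

cleanup_sketch() {
    local pid=$1
    set +e                             # modprobe -r may fail harmlessly
    modprobe -r nvme-tcp 2>/dev/null
    modprobe -r nvme-fabrics 2>/dev/null
    set -e
    if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true   # reaped status of a killed job is nonzero
    fi
    return 0
}

sleep 30 &                             # stand-in for the long-running target
cleanup_sketch $!
echo "cleanup done"
```

Swallowing `modprobe` errors matters because the modules may simply not be loaded on a given node, which is not a test failure.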
************************************ 00:12:19.099 START TEST nvmf_rpc 00:12:19.099 ************************************ 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:19.099 * Looking for test storage... 00:12:19.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.099 23:52:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:20.999 23:52:26 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:20.999 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:20.999 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:20.999 Found net devices under 0000:09:00.0: cvl_0_0 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:20.999 Found net devices under 0000:09:00.1: cvl_0_1 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.999 23:52:26 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:20.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:12:20.999 00:12:20.999 --- 10.0.0.2 ping statistics --- 00:12:20.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.999 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:12:20.999 00:12:20.999 --- 10.0.0.1 ping statistics --- 00:12:20.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.999 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:20.999 23:52:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:21.000 
23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:21.000 23:52:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:21.000 23:52:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.000 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=717502 00:12:21.000 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:21.000 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 717502 00:12:21.000 23:52:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 717502 ']' 00:12:21.000 23:52:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.000 23:52:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:21.000 23:52:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.000 23:52:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:21.000 23:52:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.000 [2024-07-24 23:52:26.602998] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:12:21.000 [2024-07-24 23:52:26.603085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.000 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.000 [2024-07-24 23:52:26.671388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.257 [2024-07-24 23:52:26.768310] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.257 [2024-07-24 23:52:26.768358] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.257 [2024-07-24 23:52:26.768375] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.257 [2024-07-24 23:52:26.768388] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.257 [2024-07-24 23:52:26.768399] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:21.257 [2024-07-24 23:52:26.768454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.257 [2024-07-24 23:52:26.770125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.257 [2024-07-24 23:52:26.770154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.257 [2024-07-24 23:52:26.770157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.257 23:52:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:21.257 23:52:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:21.257 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:21.257 23:52:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:21.257 23:52:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.257 23:52:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.257 23:52:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:21.257 23:52:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.257 23:52:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.257 23:52:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.257 23:52:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:21.257 "tick_rate": 2700000000, 00:12:21.257 "poll_groups": [ 00:12:21.257 { 00:12:21.257 "name": "nvmf_tgt_poll_group_000", 00:12:21.257 "admin_qpairs": 0, 00:12:21.257 "io_qpairs": 0, 00:12:21.257 "current_admin_qpairs": 0, 00:12:21.257 "current_io_qpairs": 0, 00:12:21.257 "pending_bdev_io": 0, 00:12:21.257 "completed_nvme_io": 0, 00:12:21.257 "transports": [] 00:12:21.257 }, 00:12:21.257 { 00:12:21.257 "name": "nvmf_tgt_poll_group_001", 00:12:21.257 "admin_qpairs": 0, 00:12:21.257 "io_qpairs": 0, 00:12:21.257 "current_admin_qpairs": 
0, 00:12:21.257 "current_io_qpairs": 0, 00:12:21.257 "pending_bdev_io": 0, 00:12:21.257 "completed_nvme_io": 0, 00:12:21.257 "transports": [] 00:12:21.257 }, 00:12:21.257 { 00:12:21.257 "name": "nvmf_tgt_poll_group_002", 00:12:21.257 "admin_qpairs": 0, 00:12:21.257 "io_qpairs": 0, 00:12:21.257 "current_admin_qpairs": 0, 00:12:21.257 "current_io_qpairs": 0, 00:12:21.257 "pending_bdev_io": 0, 00:12:21.257 "completed_nvme_io": 0, 00:12:21.257 "transports": [] 00:12:21.257 }, 00:12:21.257 { 00:12:21.257 "name": "nvmf_tgt_poll_group_003", 00:12:21.257 "admin_qpairs": 0, 00:12:21.257 "io_qpairs": 0, 00:12:21.257 "current_admin_qpairs": 0, 00:12:21.257 "current_io_qpairs": 0, 00:12:21.257 "pending_bdev_io": 0, 00:12:21.257 "completed_nvme_io": 0, 00:12:21.257 "transports": [] 00:12:21.257 } 00:12:21.257 ] 00:12:21.257 }' 00:12:21.257 23:52:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:21.257 23:52:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:21.257 23:52:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:21.257 23:52:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:21.515 23:52:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:21.515 23:52:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:21.515 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:21.515 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.515 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.515 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.515 [2024-07-24 23:52:27.025387] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:21.516 "tick_rate": 2700000000, 00:12:21.516 "poll_groups": [ 00:12:21.516 { 00:12:21.516 "name": "nvmf_tgt_poll_group_000", 00:12:21.516 "admin_qpairs": 0, 00:12:21.516 "io_qpairs": 0, 00:12:21.516 "current_admin_qpairs": 0, 00:12:21.516 "current_io_qpairs": 0, 00:12:21.516 "pending_bdev_io": 0, 00:12:21.516 "completed_nvme_io": 0, 00:12:21.516 "transports": [ 00:12:21.516 { 00:12:21.516 "trtype": "TCP" 00:12:21.516 } 00:12:21.516 ] 00:12:21.516 }, 00:12:21.516 { 00:12:21.516 "name": "nvmf_tgt_poll_group_001", 00:12:21.516 "admin_qpairs": 0, 00:12:21.516 "io_qpairs": 0, 00:12:21.516 "current_admin_qpairs": 0, 00:12:21.516 "current_io_qpairs": 0, 00:12:21.516 "pending_bdev_io": 0, 00:12:21.516 "completed_nvme_io": 0, 00:12:21.516 "transports": [ 00:12:21.516 { 00:12:21.516 "trtype": "TCP" 00:12:21.516 } 00:12:21.516 ] 00:12:21.516 }, 00:12:21.516 { 00:12:21.516 "name": "nvmf_tgt_poll_group_002", 00:12:21.516 "admin_qpairs": 0, 00:12:21.516 "io_qpairs": 0, 00:12:21.516 "current_admin_qpairs": 0, 00:12:21.516 "current_io_qpairs": 0, 00:12:21.516 "pending_bdev_io": 0, 00:12:21.516 "completed_nvme_io": 0, 00:12:21.516 "transports": [ 00:12:21.516 { 00:12:21.516 "trtype": "TCP" 00:12:21.516 } 00:12:21.516 ] 00:12:21.516 }, 00:12:21.516 { 00:12:21.516 "name": "nvmf_tgt_poll_group_003", 00:12:21.516 "admin_qpairs": 0, 00:12:21.516 "io_qpairs": 0, 00:12:21.516 "current_admin_qpairs": 0, 00:12:21.516 "current_io_qpairs": 0, 00:12:21.516 "pending_bdev_io": 0, 00:12:21.516 "completed_nvme_io": 0, 00:12:21.516 "transports": [ 00:12:21.516 { 00:12:21.516 "trtype": "TCP" 00:12:21.516 } 00:12:21.516 ] 00:12:21.516 } 
00:12:21.516 ] 00:12:21.516 }' 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.516 Malloc1 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- 
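The `jsum` helper traced above pipes `nvmf_get_stats` JSON through `jq '.poll_groups[].<field>'` and sums the values with `awk '{s+=$1}END{print s}'`. A minimal dependency-free sketch of the same aggregation is below; `grep -o` replaces `jq` only so the sketch needs no jq install, and the embedded stats JSON is a trimmed stand-in for real `nvmf_get_stats` output, not captured data.

```shell
#!/usr/bin/env bash
# Sketch of the jsum pattern: pull every per-poll-group counter for a
# field and total it with awk, as the log's jq|awk pipeline does.

stats='{
  "poll_groups": [
    { "name": "nvmf_tgt_poll_group_000", "admin_qpairs": 0, "io_qpairs": 2 },
    { "name": "nvmf_tgt_poll_group_001", "admin_qpairs": 1, "io_qpairs": 3 }
  ]
}'

jsum_sketch() {
    # $1: counter name, e.g. io_qpairs or admin_qpairs
    printf '%s\n' "$stats" \
        | grep -o "\"$1\": [0-9]*" \
        | awk '{s+=$2} END{print s+0}'
}

jsum_sketch io_qpairs      # prints 5
jsum_sketch admin_qpairs   # prints 1
```

The `s+0` in the `END` block forces a numeric `0` instead of an empty string when the field matches nothing, which keeps comparisons like `(( $(jsum ...) == 0 ))` well-formed.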
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.516 [2024-07-24 23:52:27.173743] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:12:21.516 [2024-07-24 23:52:27.196266] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:12:21.516 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:21.516 could not add new controller: failed to write to nvme-fabrics device 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.516 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.775 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.775 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.344 23:52:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:22.344 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:22.344 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.344 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:22.344 23:52:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.249 23:52:29 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:24.249 23:52:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.510 [2024-07-24 23:52:29.976345] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:12:24.510 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:24.510 could not add new controller: failed to write to nvme-fabrics device 00:12:24.510 23:52:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:24.510 23:52:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:24.510 23:52:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:24.510 23:52:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:24.510 23:52:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:24.510 23:52:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:24.510 23:52:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.510 23:52:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.510 23:52:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.076 23:52:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:25.076 23:52:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:25.076 23:52:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.076 23:52:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:25.076 23:52:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:27.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.040 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.299 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.299 23:52:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.299 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.299 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.299 [2024-07-24 23:52:32.760496] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.299 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:12:27.299 23:52:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:27.299 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.299 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.299 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.299 23:52:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:27.299 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.299 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.299 23:52:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.299 23:52:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.867 23:52:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.867 23:52:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:27.867 23:52:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.867 23:52:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:27.867 23:52:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:29.768 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:29.768 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:29.768 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.768 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:29.768 23:52:35 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.768 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:29.768 23:52:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.028 [2024-07-24 23:52:35.619680] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.028 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.029 23:52:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:30.029 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.029 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.029 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.029 23:52:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.029 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.029 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.029 23:52:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.029 23:52:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.598 23:52:36 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:30.598 23:52:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:30.598 23:52:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.598 23:52:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:30.598 23:52:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:33.133 23:52:38 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.133 [2024-07-24 23:52:38.391790] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.133 23:52:38 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.133 23:52:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.393 23:52:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.393 23:52:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:33.393 23:52:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.393 23:52:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:33.393 23:52:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 
0 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.925 [2024-07-24 23:52:41.120871] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.925 23:52:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.185 23:52:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.185 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 
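The `waitforserial` calls traced throughout this log poll `lsblk` until a block device with the expected SPDK serial appears, retrying up to 15 times with a 2-second sleep. A minimal self-contained sketch of that loop follows; `lsblk` is stubbed with a function here so the sketch runs anywhere, and the device name `nvme0n1` is illustrative, not taken from this log:

```shell
# Stub lsblk so the sketch is self-contained: pretend one NVMe namespace with
# the SPDK test serial is already attached.
lsblk() { printf 'NAME SERIAL\nnvme0n1 SPDKISFASTANDAWESOME\n'; }

# Poll for a device whose SERIAL column matches, as the traced helper does:
# up to 15 attempts, sleeping 2 s between them, succeeding once the count of
# matching lines equals the expected device count.
waitforserial() {
  local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
  while (( i++ <= 15 )); do
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( nvme_devices == nvme_device_counter )) && return 0
    sleep 2
  done
  return 1
}

waitforserial SPDKISFASTANDAWESOME && echo connected   # prints "connected"
```

Because the stub reports the device immediately, the loop succeeds on its first iteration; against a real target the `sleep 2` between probes is what gives the fabric connect time to surface the namespace.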
00:12:36.185 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.185 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:36.185 23:52:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:38.089 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:38.089 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:38.089 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.089 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:38.089 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.089 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:38.089 23:52:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.348 [2024-07-24 23:52:43.892502] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.348 23:52:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.917 23:52:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.917 23:52:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:38.917 23:52:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.917 23:52:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:38.917 23:52:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:40.818 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:40.818 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:40.818 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.818 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:40.818 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.818 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:40.818 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.077 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 [2024-07-24 23:52:46.655039] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 [2024-07-24 23:52:46.703179] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 [2024-07-24 23:52:46.751308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.078 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.078 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.078 23:52:46 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.336 [2024-07-24 23:52:46.799488] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.336 [2024-07-24 23:52:46.847634] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:41.336 "tick_rate": 2700000000, 00:12:41.336 "poll_groups": [ 00:12:41.336 { 00:12:41.336 "name": "nvmf_tgt_poll_group_000", 00:12:41.336 "admin_qpairs": 2, 00:12:41.336 "io_qpairs": 84, 00:12:41.336 "current_admin_qpairs": 0, 00:12:41.336 "current_io_qpairs": 0, 00:12:41.336 "pending_bdev_io": 0, 00:12:41.336 "completed_nvme_io": 152, 00:12:41.336 "transports": [ 00:12:41.336 { 00:12:41.336 "trtype": "TCP" 00:12:41.336 } 00:12:41.336 ] 00:12:41.336 }, 00:12:41.336 { 00:12:41.336 "name": "nvmf_tgt_poll_group_001", 00:12:41.336 "admin_qpairs": 2, 00:12:41.336 "io_qpairs": 84, 
00:12:41.336 "current_admin_qpairs": 0, 00:12:41.336 "current_io_qpairs": 0, 00:12:41.336 "pending_bdev_io": 0, 00:12:41.336 "completed_nvme_io": 89, 00:12:41.336 "transports": [ 00:12:41.336 { 00:12:41.336 "trtype": "TCP" 00:12:41.336 } 00:12:41.336 ] 00:12:41.336 }, 00:12:41.336 { 00:12:41.336 "name": "nvmf_tgt_poll_group_002", 00:12:41.336 "admin_qpairs": 1, 00:12:41.336 "io_qpairs": 84, 00:12:41.336 "current_admin_qpairs": 0, 00:12:41.336 "current_io_qpairs": 0, 00:12:41.336 "pending_bdev_io": 0, 00:12:41.336 "completed_nvme_io": 243, 00:12:41.336 "transports": [ 00:12:41.336 { 00:12:41.336 "trtype": "TCP" 00:12:41.336 } 00:12:41.336 ] 00:12:41.336 }, 00:12:41.336 { 00:12:41.336 "name": "nvmf_tgt_poll_group_003", 00:12:41.336 "admin_qpairs": 2, 00:12:41.336 "io_qpairs": 84, 00:12:41.336 "current_admin_qpairs": 0, 00:12:41.336 "current_io_qpairs": 0, 00:12:41.336 "pending_bdev_io": 0, 00:12:41.336 "completed_nvme_io": 202, 00:12:41.336 "transports": [ 00:12:41.336 { 00:12:41.336 "trtype": "TCP" 00:12:41.336 } 00:12:41.336 ] 00:12:41.336 } 00:12:41.336 ] 00:12:41.336 }' 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:41.336 23:52:46 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:41.337 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:41.337 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:41.337 23:52:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:41.337 23:52:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:41.337 23:52:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:41.337 23:52:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:41.337 23:52:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:41.337 23:52:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:41.337 23:52:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:41.337 rmmod nvme_tcp 00:12:41.337 rmmod nvme_fabrics 00:12:41.337 rmmod nvme_keyring 00:12:41.337 23:52:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:41.337 23:52:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:41.337 23:52:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:41.337 23:52:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 717502 ']' 00:12:41.337 23:52:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 717502 00:12:41.337 23:52:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 717502 ']' 00:12:41.337 23:52:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 717502 00:12:41.337 23:52:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:12:41.337 23:52:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:41.337 23:52:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 717502 00:12:41.595 23:52:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:41.595 23:52:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:41.595 23:52:47 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 717502' 00:12:41.595 killing process with pid 717502 00:12:41.595 23:52:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 717502 00:12:41.595 23:52:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 717502 00:12:41.855 23:52:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:41.855 23:52:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:41.855 23:52:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:41.855 23:52:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.855 23:52:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:41.855 23:52:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.855 23:52:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.855 23:52:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.758 23:52:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:43.758 00:12:43.758 real 0m25.124s 00:12:43.758 user 1m21.380s 00:12:43.758 sys 0m4.119s 00:12:43.758 23:52:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:43.758 23:52:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.758 ************************************ 00:12:43.758 END TEST nvmf_rpc 00:12:43.758 ************************************ 00:12:43.758 23:52:49 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:43.758 23:52:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:43.758 23:52:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:43.758 23:52:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:12:43.758 ************************************ 00:12:43.758 START TEST nvmf_invalid 00:12:43.758 ************************************ 00:12:43.758 23:52:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:43.758 * Looking for test storage... 00:12:43.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.758 23:52:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.758 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:43.758 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.758 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.758 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.758 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.758 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.758 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.758 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.758 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.758 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.016 23:52:49 
nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:44.016 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:44.017 23:52:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
00:12:45.919 Found 0000:09:00.0 (0x8086 - 0x159b)
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:12:45.919 Found 0000:09:00.1 (0x8086 - 0x159b)
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:45.919 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]]
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:12:45.920 Found net devices under 0000:09:00.0: cvl_0_0
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]]
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:12:45.920 Found net devices under 0000:09:00.1: cvl_0_1
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:45.920 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:46.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:46.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms
00:12:46.180
00:12:46.180 --- 10.0.0.2 ping statistics ---
00:12:46.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:46.180 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:46.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:46.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms
00:12:46.180
00:12:46.180 --- 10.0.0.1 ping statistics ---
00:12:46.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:46.180 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=721986
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 721986
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 721986 ']'
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:46.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable
00:12:46.180 23:52:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
[2024-07-24 23:52:51.739962] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
[2024-07-24 23:52:51.740051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-24 23:52:51.819649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-07-24 23:52:51.917678] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-07-24 23:52:51.917723] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-07-24 23:52:51.917751] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-07-24 23:52:51.917763] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-07-24 23:52:51.917773] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-07-24 23:52:51.917853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
[2024-07-24 23:52:51.917934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
[2024-07-24 23:52:51.919125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
[2024-07-24 23:52:51.919131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
23:52:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 ))
23:52:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0
23:52:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
23:52:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable
23:52:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
23:52:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8126
[2024-07-24 23:52:52.343835] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:12:46.696 {
00:12:46.696 "nqn": "nqn.2016-06.io.spdk:cnode8126",
00:12:46.696 "tgt_name": "foobar",
00:12:46.696 "method": "nvmf_create_subsystem",
00:12:46.696 "req_id": 1
00:12:46.696 }
00:12:46.696 Got JSON-RPC error response
00:12:46.696 response:
00:12:46.696 {
00:12:46.696 "code": -32603,
00:12:46.696 "message": "Unable to find target foobar"
00:12:46.696 }'
00:12:46.696 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:12:46.696 {
00:12:46.696 "nqn": "nqn.2016-06.io.spdk:cnode8126",
00:12:46.696 "tgt_name": "foobar",
00:12:46.696 "method": "nvmf_create_subsystem",
00:12:46.696 "req_id": 1
00:12:46.696 }
00:12:46.696 Got JSON-RPC error response
00:12:46.696 response:
00:12:46.696 {
00:12:46.696 "code": -32603,
00:12:46.696 "message": "Unable to find target foobar"
00:12:46.696 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:12:46.696 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:12:46.696 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29020
[2024-07-24 23:52:52.600698] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29020: invalid serial number 'SPDKISFASTANDAWESOME'
23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:12:47.001 {
00:12:47.001 "nqn": "nqn.2016-06.io.spdk:cnode29020",
00:12:47.001 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:12:47.001 "method": "nvmf_create_subsystem",
00:12:47.001 "req_id": 1
00:12:47.001 }
00:12:47.001 Got JSON-RPC error response
00:12:47.001 response:
00:12:47.001 {
00:12:47.001 "code": -32602,
00:12:47.001 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:12:47.001 }'
00:12:47.001 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:12:47.001 {
00:12:47.001 "nqn": "nqn.2016-06.io.spdk:cnode29020",
00:12:47.001 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:12:47.001 "method": "nvmf_create_subsystem",
00:12:47.001 "req_id": 1
00:12:47.001 }
00:12:47.001 Got JSON-RPC error response
00:12:47.001 response:
00:12:47.001 {
00:12:47.001 "code": -32602,
00:12:47.001 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:12:47.001 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:12:47.001 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:12:47.001 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4859
[2024-07-24 23:52:52.853499] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4859: invalid model number 'SPDK_Controller'
23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:12:47.259 {
00:12:47.259 "nqn": "nqn.2016-06.io.spdk:cnode4859",
00:12:47.259 "model_number": "SPDK_Controller\u001f",
00:12:47.259 "method": "nvmf_create_subsystem",
00:12:47.259 "req_id": 1
00:12:47.259 }
00:12:47.259 Got JSON-RPC error response
00:12:47.259 response:
00:12:47.259 {
00:12:47.259 "code": -32602,
00:12:47.259 "message": "Invalid MN SPDK_Controller\u001f"
00:12:47.259 }'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:12:47.259 {
00:12:47.259 "nqn": "nqn.2016-06.io.spdk:cnode4859",
00:12:47.259 "model_number": "SPDK_Controller\u001f",
00:12:47.259 "method": "nvmf_create_subsystem",
00:12:47.259 "req_id": 1
00:12:47.259 }
00:12:47.259 Got JSON-RPC error response
00:12:47.259 response:
00:12:47.259 {
00:12:47.259 "code": -32602,
00:12:47.259 "message": "Invalid MN SPDK_Controller\u001f"
00:12:47.259 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+==
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a'
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127
00:12:47.259 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f'
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177'
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76'
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d'
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']'
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d'
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69'
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65'
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ R == \- ]]
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'R#AQ$3r=JH!e@|jv]mie'
00:12:47.260 23:52:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'R#AQ$3r=JH!e@|jv]mie' nqn.2016-06.io.spdk:cnode24003
00:12:47.519 [2024-07-24 23:52:53.222758] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24003: invalid serial number 'R#AQ$3r=JH!e@|jv]mie'
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:12:47.777 {
00:12:47.777 "nqn": "nqn.2016-06.io.spdk:cnode24003",
00:12:47.777 "serial_number": "R#AQ$3r=JH!e@|j\u007fv]mie",
00:12:47.777 "method": "nvmf_create_subsystem",
00:12:47.777 "req_id": 1
00:12:47.777 }
00:12:47.777 Got JSON-RPC error response
00:12:47.777 response:
00:12:47.777 {
00:12:47.777 "code": -32602,
00:12:47.777 "message": "Invalid SN R#AQ$3r=JH!e@|j\u007fv]mie"
00:12:47.777 }'
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:12:47.777 {
00:12:47.777 "nqn": "nqn.2016-06.io.spdk:cnode24003",
00:12:47.777 "serial_number": "R#AQ$3r=JH!e@|j\u007fv]mie",
00:12:47.777 "method": "nvmf_create_subsystem",
00:12:47.777 "req_id": 1
00:12:47.777 }
00:12:47.777 Got JSON-RPC error response
00:12:47.777 response:
00:12:47.777 {
00:12:47.777 "code": -32602,
00:12:47.777 "message": "Invalid SN R#AQ$3r=JH!e@|j\u007fv]mie"
00:12:47.777 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c'
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<'
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76'
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23'
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#'
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d'
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e'
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30'
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38'
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53'
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76'
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e'
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=.
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 
23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:47.777 
23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:47.777 23:52:53 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ < == \- ]]
00:12:47.777 23:52:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ' /dev/null'
00:12:50.394 23:52:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:52.932 23:52:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:52.932
00:12:52.932 real 0m8.655s
00:12:52.932 user 0m20.086s
00:12:52.932 sys 0m2.423s
00:12:52.932 23:52:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable
00:12:52.932 23:52:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:12:52.932 ************************************
00:12:52.932 END TEST nvmf_invalid
00:12:52.932 ************************************
00:12:52.932 23:52:58 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:12:52.932 23:52:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:12:52.932 23:52:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:12:52.932 23:52:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:12:52.932 ************************************
00:12:52.932 START TEST nvmf_abort
00:12:52.932 ************************************
00:12:52.932 23:52:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:12:52.932 * Looking for test storage...
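The per-character loop traced above (`target/invalid.sh` lines 24-25) builds a random string by converting each decimal code point to hex with `printf %x` and then to a character with `echo -e`. A minimal standalone bash sketch of that technique, using the first few code points that appear in this log (35, 77, 78, 48, 56):

```shell
#!/usr/bin/env bash
# Sketch of the string-building loop traced in the log above:
# each decimal code point is converted to hex with printf %x,
# then to a character with echo -e, and appended to the string.
string=''
for code in 35 77 78 48 56; do     # first code points seen in the log
  hex=$(printf %x "$code")         # e.g. 35 -> 23
  string+=$(echo -e "\x$hex")      # e.g. \x23 -> '#'
done
echo "$string"                     # prints '#MN08'
```

This matches the first five characters appended in the trace (`#`, `M`, `N`, `0`, `8`); the test script repeats the same step for every position of the generated string.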
00:12:52.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.932 23:52:58 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.932 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:52.932 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.932 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.932 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.932 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.932 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.932 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.932 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.932 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.932 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.932 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.932 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:52.932 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:12:52.933 23:52:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:54.837 23:53:00 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:54.837 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:54.837 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- 
# (( 0 > 0 )) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:54.837 Found net devices under 0000:09:00.0: cvl_0_0 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.837 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:54.838 Found net devices under 
0000:09:00.1: cvl_0_1 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:54.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:54.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms
00:12:54.838
00:12:54.838 --- 10.0.0.2 ping statistics ---
00:12:54.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:54.838 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:54.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:54.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms
00:12:54.838
00:12:54.838 --- 10.0.0.1 ping statistics ---
00:12:54.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:54.838 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=724514
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 724514
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 724514 ']'
00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort --
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:54.838 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:54.838 [2024-07-24 23:53:00.324068] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:12:54.838 [2024-07-24 23:53:00.324172] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.838 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.838 [2024-07-24 23:53:00.395991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:54.838 [2024-07-24 23:53:00.488666] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.838 [2024-07-24 23:53:00.488718] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.838 [2024-07-24 23:53:00.488747] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.838 [2024-07-24 23:53:00.488760] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.838 [2024-07-24 23:53:00.488777] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:54.838 [2024-07-24 23:53:00.488903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.838 [2024-07-24 23:53:00.488955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.838 [2024-07-24 23:53:00.488957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:55.099 [2024-07-24 23:53:00.621600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:55.099 Malloc0 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:55.099 Delay0 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:55.099 [2024-07-24 23:53:00.684744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.099 23:53:00 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:55.099 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.099 [2024-07-24 23:53:00.790312] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:57.631 Initializing NVMe Controllers 00:12:57.631 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:57.631 controller IO queue size 128 less than required 00:12:57.631 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:57.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:57.632 Initialization complete. Launching workers. 
00:12:57.632 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33566 00:12:57.632 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33627, failed to submit 62 00:12:57.632 success 33570, unsuccess 57, failed 0 00:12:57.632 23:53:02 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:57.632 23:53:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.632 23:53:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:57.632 23:53:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.632 23:53:02 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:57.632 23:53:02 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:57.632 23:53:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:57.632 23:53:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:57.632 23:53:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:57.632 23:53:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:57.632 23:53:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:57.632 23:53:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:57.632 rmmod nvme_tcp 00:12:57.632 rmmod nvme_fabrics 00:12:57.632 rmmod nvme_keyring 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 724514 ']' 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 724514 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 724514 ']' 00:12:57.632 23:53:03 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 724514 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 724514 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 724514' 00:12:57.632 killing process with pid 724514 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 724514 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 724514 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.632 23:53:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.170 23:53:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:00.170 00:13:00.170 real 0m7.214s 00:13:00.170 user 0m10.657s 00:13:00.170 sys 0m2.495s 00:13:00.170 23:53:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 
00:13:00.170 23:53:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:00.170 ************************************ 00:13:00.170 END TEST nvmf_abort 00:13:00.170 ************************************ 00:13:00.170 23:53:05 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:00.170 23:53:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:00.170 23:53:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:00.170 23:53:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:00.170 ************************************ 00:13:00.170 START TEST nvmf_ns_hotplug_stress 00:13:00.170 ************************************ 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:00.170 * Looking for test storage... 
00:13:00.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.170 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.171 23:53:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.171 23:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:02.078 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:02.078 
23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:02.078 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:02.078 
Found net devices under 0000:09:00.0: cvl_0_0 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:02.078 Found net devices under 0000:09:00.1: cvl_0_1 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.078 23:53:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:02.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:13:02.078 00:13:02.078 --- 10.0.0.2 ping statistics --- 00:13:02.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.078 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:13:02.078 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:02.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:13:02.078 00:13:02.078 --- 10.0.0.1 ping statistics --- 00:13:02.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.079 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=726844 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 726844 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 726844 ']' 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:02.079 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.079 [2024-07-24 23:53:07.569649] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:13:02.079 [2024-07-24 23:53:07.569727] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.079 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.079 [2024-07-24 23:53:07.632295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:02.079 [2024-07-24 23:53:07.717738] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.079 [2024-07-24 23:53:07.717794] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.079 [2024-07-24 23:53:07.717823] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.079 [2024-07-24 23:53:07.717834] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.079 [2024-07-24 23:53:07.717844] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:02.079 [2024-07-24 23:53:07.717942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.079 [2024-07-24 23:53:07.718007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.079 [2024-07-24 23:53:07.718009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.338 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:02.338 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:02.338 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:02.338 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:02.338 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.338 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.338 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:02.338 23:53:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:02.596 [2024-07-24 23:53:08.133462] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.596 23:53:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:02.854 23:53:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.112 [2024-07-24 23:53:08.676288] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:13:03.112 23:53:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:03.369 23:53:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:03.627 Malloc0 00:13:03.627 23:53:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:03.885 Delay0 00:13:03.885 23:53:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.143 23:53:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:04.401 NULL1 00:13:04.401 23:53:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:04.658 23:53:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=727144 00:13:04.658 23:53:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:04.658 23:53:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:04.658 23:53:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.659 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.038 Read completed with error (sct=0, sc=11) 00:13:06.038 23:53:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.296 23:53:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:06.296 23:53:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:06.555 true 00:13:06.555 23:53:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:06.555 23:53:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.123 23:53:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.123 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:13:07.688 23:53:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:07.688 23:53:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:07.688 true 00:13:07.688 23:53:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:07.688 23:53:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.946 23:53:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.510 23:53:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:08.510 23:53:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:08.510 true 00:13:08.510 23:53:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:08.510 23:53:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.768 23:53:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.026 23:53:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:09.026 23:53:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:09.284 true 00:13:09.284 23:53:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:09.284 23:53:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.663 23:53:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.663 23:53:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:10.663 23:53:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:10.921 true 00:13:10.921 23:53:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:10.921 23:53:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.185 23:53:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.489 23:53:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:11.489 23:53:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:11.747 true 00:13:11.747 23:53:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 
00:13:11.747 23:53:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.312 23:53:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.876 23:53:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:12.877 23:53:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:12.877 true 00:13:13.133 23:53:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:13.133 23:53:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.133 23:53:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.391 23:53:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:13.391 23:53:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:13.648 true 00:13:13.648 23:53:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:13.648 23:53:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:13:14.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.580 23:53:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.837 23:53:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:14.838 23:53:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:15.095 true 00:13:15.095 23:53:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:15.095 23:53:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.351 23:53:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.608 23:53:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:15.608 23:53:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:15.865 true 00:13:15.865 23:53:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:15.865 23:53:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.796 23:53:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:16.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:16.796 23:53:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:16.796 23:53:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:17.053 true 00:13:17.053 23:53:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:17.053 23:53:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.311 23:53:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.569 23:53:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:17.569 23:53:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:17.826 true 00:13:17.826 23:53:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:17.826 23:53:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.757 23:53:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.014 
23:53:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:19.014 23:53:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:19.271 true 00:13:19.271 23:53:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:19.271 23:53:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.529 23:53:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.786 23:53:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:19.786 23:53:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:20.043 true 00:13:20.043 23:53:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:20.043 23:53:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.975 23:53:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:20.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:20.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.232 23:53:26 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:21.232 23:53:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:21.490 true 00:13:21.490 23:53:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:21.490 23:53:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.747 23:53:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.005 23:53:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:22.005 23:53:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:22.262 true 00:13:22.262 23:53:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:22.262 23:53:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.193 23:53:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.193 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.193 23:53:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:23.193 23:53:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1017 00:13:23.450 true 00:13:23.450 23:53:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:23.450 23:53:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.708 23:53:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.965 23:53:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:23.965 23:53:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:24.222 true 00:13:24.223 23:53:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:24.223 23:53:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.154 23:53:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.411 23:53:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:25.411 23:53:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:25.668 true 00:13:25.668 23:53:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:25.668 23:53:31 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.925 23:53:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.183 23:53:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:26.183 23:53:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:26.441 true 00:13:26.441 23:53:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:26.441 23:53:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.373 23:53:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.630 23:53:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:27.630 23:53:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:27.630 true 00:13:27.887 23:53:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:27.887 23:53:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.887 23:53:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.149 23:53:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:28.149 23:53:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:28.446 true 00:13:28.446 23:53:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:28.446 23:53:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.381 23:53:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.639 23:53:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:29.639 23:53:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:30.204 true 00:13:30.204 23:53:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:30.204 23:53:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.461 23:53:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.719 23:53:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:30.719 23:53:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:30.719 true 00:13:30.977 23:53:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:30.977 23:53:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.977 23:53:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.234 23:53:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:31.234 23:53:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:31.491 true 00:13:31.491 23:53:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:31.491 23:53:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.862 23:53:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.862 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.862 23:53:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:32.862 23:53:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:33.119 true 00:13:33.119 23:53:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:33.119 23:53:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.052 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.052 23:53:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.052 23:53:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:34.052 23:53:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:34.309 true 00:13:34.309 23:53:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:34.309 23:53:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.567 23:53:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:13:34.823 23:53:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:34.823 23:53:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:34.823 Initializing NVMe Controllers 00:13:34.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:34.823 Controller IO queue size 128, less than required. 00:13:34.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:34.823 Controller IO queue size 128, less than required. 00:13:34.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:34.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:34.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:34.823 Initialization complete. Launching workers. 
00:13:34.823 ======================================================== 00:13:34.823 Latency(us) 00:13:34.823 Device Information : IOPS MiB/s Average min max 00:13:34.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1092.90 0.53 62483.52 2884.21 1013209.51 00:13:34.824 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10586.54 5.17 12092.20 4148.74 460924.95 00:13:34.824 ======================================================== 00:13:34.824 Total : 11679.44 5.70 16807.55 2884.21 1013209.51 00:13:34.824 00:13:35.080 true 00:13:35.080 23:53:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 727144 00:13:35.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (727144) - No such process 00:13:35.080 23:53:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 727144 00:13:35.080 23:53:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.337 23:53:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:35.594 23:53:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:35.594 23:53:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:35.594 23:53:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:35.594 23:53:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:35.594 23:53:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:35.851 null0 
00:13:35.851 23:53:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:35.851 23:53:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:35.851 23:53:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:36.108 null1 00:13:36.108 23:53:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:36.108 23:53:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:36.108 23:53:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:36.366 null2 00:13:36.366 23:53:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:36.366 23:53:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:36.366 23:53:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:36.624 null3 00:13:36.624 23:53:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:36.624 23:53:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:36.624 23:53:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:36.881 null4 00:13:36.881 23:53:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:36.881 23:53:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:36.881 23:53:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:37.139 null5 00:13:37.139 23:53:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:37.139 23:53:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:37.139 23:53:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:37.396 null6 00:13:37.396 23:53:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:37.396 23:53:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:37.396 23:53:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:37.654 null7 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 731168 731169 731171 731173 731175 731177 731179 731181 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.654 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:13:37.911 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.911 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:37.911 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:37.911 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:37.911 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:37.911 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:37.911 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:37.911 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.168 23:53:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:38.426 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:38.426 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.426 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.426 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:38.426 23:53:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:38.426 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:38.426 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:38.426 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:38.683 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.683 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.683 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:38.683 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.683 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.683 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:38.683 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.683 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.683 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:38.683 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.683 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.684 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:38.684 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.684 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.684 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:38.684 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.684 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.684 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:38.684 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.684 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.684 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:38.684 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.684 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.684 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:38.941 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:38.941 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.941 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.941 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:38.941 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:38.941 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:38.941 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:38.941 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:39.198 23:53:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.198 23:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:39.454 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:39.455 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:39.455 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.455 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:39.455 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:39.455 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:39.455 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:39.455 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:39.712 23:53:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.712 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:39.968 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:39.968 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:39.968 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:39.968 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.968 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:39.968 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:39.968 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:39.968 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:40.226 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.226 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.226 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:40.226 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.226 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.226 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:40.226 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.226 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.226 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.226 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.226 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:40.226 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:40.226 23:53:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.226 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.226 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:40.484 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.484 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.484 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:40.484 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.484 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.484 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:40.484 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.484 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.484 23:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:40.741 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:40.741 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.741 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:40.741 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:40.741 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:40.741 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.741 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:40.741 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:40.999 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.999 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.999 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.000 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:41.258 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:41.258 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:41.258 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:41.258 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.258 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.258 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:41.258 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:41.258 23:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:41.515 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.516 23:53:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.516 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:13:41.773 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:41.773 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:41.773 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.773 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.773 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:41.773 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:41.773 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:41.773 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:42.050 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.050 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.050 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:42.050 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.050 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.050 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:42.050 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.050 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.050 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:42.050 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.050 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.051 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.051 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:42.051 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.051 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:42.051 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.051 23:53:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.051 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.051 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.051 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:42.051 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:42.051 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.051 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.051 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:42.308 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:42.308 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:42.308 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.308 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:42.308 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:42.308 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:42.308 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.308 23:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:42.566 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.566 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.566 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:42.566 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.566 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.566 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:42.566 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.566 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.566 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:42.566 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.566 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.566 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.566 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.567 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:42.567 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:42.567 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.567 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.567 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.567 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:42.567 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.567 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:42.567 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.567 23:53:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.567 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:42.825 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:42.825 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:42.825 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.825 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.825 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.825 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:42.825 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:42.825 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
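The loop driving the entries above (ns_hotplug_stress.sh lines 16-18) hot-adds eight namespaces to nqn.2016-06.io.spdk:cnode1 in shuffled order, backed by bdevs null0..null7, then hot-removes them, for ten iterations. A minimal self-contained sketch of that pattern is below; `rpc` here is a recording stub standing in for SPDK's real `scripts/rpc.py`, and the shuffled ordering is an illustration of the randomized order visible in the log, not the script's exact mechanism:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for spdk/scripts/rpc.py: records each RPC instead of
# sending it to a live nvmf target (stub, for illustration only).
calls=()
rpc() {
    calls+=("$*")
}

NQN=nqn.2016-06.io.spdk:cnode1

for ((i = 0; i < 10; ++i)); do                  # cf. ns_hotplug_stress.sh@16
    for n in $(seq 1 8 | shuf); do              # hot-add in shuffled order
        rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"   # cf. @17
    done
    for n in $(seq 1 8 | shuf); do              # hot-remove in shuffled order
        rpc nvmf_subsystem_remove_ns "$NQN" "$n"                    # cf. @18
    done
done

echo "${#calls[@]} RPC calls issued"   # 10 x (8 adds + 8 removes) = 160
```

Note the nsid-to-bdev mapping matches the log: `-n 3` pairs with `null2`, `-n 6` with `null5`, i.e. bdev `null$((n - 1))` for namespace `n`.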
00:13:43.083 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:43.083 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:43.083 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:43.083 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:43.083 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:43.083 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:43.083 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:43.083 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:43.083 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
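The `nvmftestfini`/`nvmfcleanup` sequence invoked above, once the stress loop clears its EXIT trap, unloads the kernel NVMe-oF modules and kills the target process. A rough, self-contained approximation of that flow is sketched below; the real helpers live in spdk's test/nvmf/common.sh and autotest_common.sh, and the retry shape and simplified `killprocess` here are illustrative assumptions modeled on the log entries, not the actual implementation:

```shell
#!/usr/bin/env bash
# Illustrative sketch of an nvmftestfini-style teardown (not the real
# SPDK code). Module names and the 20-try loop mirror the log output.

nvmfcleanup() {
    sync
    set +e                                # unload may fail until handles close
    for _ in {1..20}; do                  # cf. nvmf/common.sh "for i in {1..20}"
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
}

# Kill the target process only if it is still alive, then reap it
# (cf. the kill -0 / kill / wait sequence in the log).
killprocess() {
    local pid=$1
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"
        wait "$pid" 2>/dev/null || true   # tolerate the non-zero exit of a killed job
    fi
}
```

`nvmfcleanup` needs root to actually unload modules; defining it costs nothing, which is why the functions are safe to source before the target is even started.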
00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:43.084 rmmod nvme_tcp 00:13:43.084 rmmod nvme_fabrics 00:13:43.084 rmmod nvme_keyring 00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 726844 ']' 00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 726844 00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 726844 ']' 00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 726844 00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 726844 00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 726844' 00:13:43.084 killing process 
with pid 726844
00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 726844
00:13:43.084 23:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 726844
00:13:43.342 23:53:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:13:43.342 23:53:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:13:43.342 23:53:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:13:43.342 23:53:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:43.342 23:53:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:43.342 23:53:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:43.342 23:53:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:43.342 23:53:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:45.874 23:53:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:45.874
00:13:45.874 real 0m45.674s
00:13:45.874 user 3m28.927s
00:13:45.874 sys 0m16.157s
00:13:45.874 23:53:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable
00:13:45.874 23:53:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:13:45.874 ************************************
00:13:45.874 END TEST nvmf_ns_hotplug_stress
00:13:45.874 ************************************
00:13:45.874 23:53:51 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:45.875 23:53:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:13:45.875 23:53:51 nvmf_tcp -- common/autotest_common.sh@1103 -- #
xtrace_disable
00:13:45.875 23:53:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:45.875 ************************************
00:13:45.875 START TEST nvmf_connect_stress
00:13:45.875 ************************************
00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:45.875 * Looking for test storage...
00:13:45.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:13:45.875 23:53:51
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:45.875 23:53:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:45.875 23:53:51 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.801 23:53:53 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:47.801 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:47.801 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:47.801 Found net devices under 0000:09:00.0: cvl_0_0 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:47.801 Found net devices under 0000:09:00.1: cvl_0_1 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.801 23:53:53 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:47.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:13:47.801 00:13:47.801 --- 10.0.0.2 ping statistics --- 00:13:47.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.801 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:13:47.801 00:13:47.801 --- 10.0.0.1 ping statistics --- 00:13:47.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.801 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:47.801 23:53:53 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=733930 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 733930 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 733930 ']' 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:47.801 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.801 [2024-07-24 23:53:53.288698] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:13:47.801 [2024-07-24 23:53:53.288788] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.801 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.801 [2024-07-24 23:53:53.357647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:47.801 [2024-07-24 23:53:53.449615] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.801 [2024-07-24 23:53:53.449677] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.801 [2024-07-24 23:53:53.449694] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.801 [2024-07-24 23:53:53.449707] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.801 [2024-07-24 23:53:53.449718] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:47.801 [2024-07-24 23:53:53.449827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.801 [2024-07-24 23:53:53.449916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.801 [2024-07-24 23:53:53.449919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.058 [2024-07-24 23:53:53.597363] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.058 [2024-07-24 23:53:53.628263] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.058 NULL1 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.058 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=733961 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 
23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 733961 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.059 23:53:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.316 23:53:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.316 23:53:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:48.316 23:53:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.316 23:53:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.316 23:53:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.880 23:53:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.880 23:53:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:48.880 23:53:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.880 23:53:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.880 23:53:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.137 23:53:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.137 23:53:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:49.137 23:53:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.137 23:53:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.137 23:53:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.394 23:53:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.394 23:53:54 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 733961 00:13:49.394 23:53:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.394 23:53:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.394 23:53:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.651 23:53:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.651 23:53:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:49.651 23:53:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.651 23:53:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.651 23:53:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.908 23:53:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.908 23:53:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:49.908 23:53:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.908 23:53:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.908 23:53:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.472 23:53:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.472 23:53:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:50.472 23:53:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.472 23:53:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.472 23:53:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.729 23:53:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.729 23:53:56 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 733961 00:13:50.729 23:53:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.729 23:53:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.729 23:53:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.986 23:53:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.986 23:53:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:50.986 23:53:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.986 23:53:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.986 23:53:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.243 23:53:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.243 23:53:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:51.243 23:53:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.243 23:53:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.243 23:53:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.500 23:53:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.757 23:53:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:51.757 23:53:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.757 23:53:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.757 23:53:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.014 23:53:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.014 23:53:57 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 733961 00:13:52.014 23:53:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.014 23:53:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.014 23:53:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.271 23:53:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.271 23:53:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:52.271 23:53:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.271 23:53:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.271 23:53:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.529 23:53:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.529 23:53:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:52.529 23:53:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.529 23:53:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.529 23:53:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.786 23:53:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.786 23:53:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:52.786 23:53:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.786 23:53:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.786 23:53:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.351 23:53:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.351 23:53:58 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 733961 00:13:53.351 23:53:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.351 23:53:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.351 23:53:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.607 23:53:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.607 23:53:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:53.607 23:53:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.607 23:53:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.607 23:53:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.863 23:53:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.863 23:53:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:53.863 23:53:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.863 23:53:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.863 23:53:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.121 23:53:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.121 23:53:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:54.121 23:53:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.121 23:53:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.121 23:53:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.685 23:54:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.685 23:54:00 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 733961 00:13:54.685 23:54:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.685 23:54:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.685 23:54:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.942 23:54:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.942 23:54:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:54.942 23:54:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.942 23:54:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.942 23:54:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.199 23:54:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.200 23:54:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:55.200 23:54:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.200 23:54:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.200 23:54:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.457 23:54:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.457 23:54:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:55.457 23:54:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.457 23:54:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.457 23:54:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.717 23:54:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.717 23:54:01 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 733961 00:13:55.717 23:54:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.717 23:54:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.717 23:54:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.281 23:54:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.281 23:54:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:56.281 23:54:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.281 23:54:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.281 23:54:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.538 23:54:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.538 23:54:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:56.538 23:54:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.538 23:54:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.538 23:54:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.796 23:54:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.796 23:54:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:56.796 23:54:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.796 23:54:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.796 23:54:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.054 23:54:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.054 23:54:02 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 733961 00:13:57.054 23:54:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.054 23:54:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.054 23:54:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.312 23:54:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.312 23:54:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:57.312 23:54:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.312 23:54:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.312 23:54:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.878 23:54:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.878 23:54:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:57.878 23:54:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.878 23:54:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.878 23:54:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.136 23:54:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.136 23:54:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:58.136 23:54:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.136 23:54:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.136 23:54:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.136 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:58.394 23:54:03 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.394 23:54:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 733961 00:13:58.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (733961) - No such process 00:13:58.394 23:54:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 733961 00:13:58.394 23:54:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:58.394 23:54:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:58.394 23:54:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:58.394 23:54:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:58.394 23:54:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:58.394 23:54:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:58.394 23:54:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:58.394 23:54:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:58.394 23:54:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:58.394 rmmod nvme_tcp 00:13:58.394 rmmod nvme_fabrics 00:13:58.394 rmmod nvme_keyring 00:13:58.394 23:54:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:58.394 23:54:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:58.394 23:54:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:58.394 23:54:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 733930 ']' 00:13:58.394 23:54:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 733930 00:13:58.394 23:54:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 733930 ']' 
00:13:58.394 23:54:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 733930 00:13:58.394 23:54:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:13:58.394 23:54:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:58.394 23:54:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 733930 00:13:58.394 23:54:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:58.394 23:54:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:58.394 23:54:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 733930' 00:13:58.394 killing process with pid 733930 00:13:58.394 23:54:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 733930 00:13:58.394 23:54:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 733930 00:13:58.653 23:54:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:58.653 23:54:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:58.653 23:54:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:58.653 23:54:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:58.653 23:54:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:58.653 23:54:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.653 23:54:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.653 23:54:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.188 23:54:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:01.188 
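The long run of `kill -0 733961` / `rpc_cmd` lines above is a liveness poll: the script repeatedly probes whether the background stress process still exists until `kill -0` finally reports "No such process", then reaps it with `wait`. A minimal sketch of that pattern (the `sleep` workload and variable names are illustrative stand-ins, not taken from the SPDK scripts):

```shell
#!/bin/sh
# Illustrative poll loop: wait for a background process to exit by
# probing it with `kill -0` (sends no signal, only checks existence).
sleep 1 &            # stand-in for the real stress workload
pid=$!
while kill -0 "$pid" 2>/dev/null; do
    # the real script issues an rpc_cmd against the target here
    sleep 0.2
done
wait "$pid"          # reap the job; mirrors the `wait 733961` in the log
echo "process $pid has exited"
```

Once the probe fails, `wait` collects the exit status, which is why the log shows the `kill: (733961) - No such process` message immediately followed by `wait 733961`.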
00:14:01.188 real 0m15.243s 00:14:01.188 user 0m38.301s 00:14:01.188 sys 0m5.776s 00:14:01.188 23:54:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:01.188 23:54:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.188 ************************************ 00:14:01.188 END TEST nvmf_connect_stress 00:14:01.188 ************************************ 00:14:01.188 23:54:06 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:01.188 23:54:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:01.188 23:54:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:01.188 23:54:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:01.188 ************************************ 00:14:01.188 START TEST nvmf_fused_ordering 00:14:01.188 ************************************ 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:01.188 * Looking for test storage... 
00:14:01.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.188 23:54:06 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.188 23:54:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.189 23:54:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.189 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:01.189 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:01.189 23:54:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:01.189 23:54:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:03.092 23:54:08 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.092 
23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:03.092 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.092 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:03.093 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.093 
23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:03.093 Found net devices under 0000:09:00.0: cvl_0_0 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.093 23:54:08 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:03.093 Found net devices under 0000:09:00.1: cvl_0_1 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:03.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:03.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:14:03.093 00:14:03.093 --- 10.0.0.2 ping statistics --- 00:14:03.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.093 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:03.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:03.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:14:03.093 00:14:03.093 --- 10.0.0.1 ping statistics --- 00:14:03.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.093 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:03.093 23:54:08 
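The `nvmf_tcp_init` sequence in the log (@229–@268) can be sketched as a standalone root-only script. Interface names, the namespace name, the 10.0.0.0/24 addresses, and port 4420 are taken verbatim from the log; this is a sketch of the topology being built, not the full helper:

```shell
#!/usr/bin/env bash
# Sketch of the two-port loopback topology from the log: the first port
# (target side) is moved into a network namespace, the second port stays in
# the default namespace as the initiator. Requires root; names are from the
# log and will differ on other hardware.
set -e

TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic (port 4420) in from the initiator-facing interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify connectivity in both directions, as the log's pings do.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Isolating the target port in a namespace is what forces traffic between 10.0.0.1 and 10.0.0.2 through the kernel network stack rather than short-circuiting over loopback, which is why the subsequent `nvmf_tgt` is launched under `ip netns exec cvl_0_0_ns_spdk`.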
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=737570 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 737570 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 737570 ']' 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:03.093 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.093 [2024-07-24 23:54:08.583766] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:03.093 [2024-07-24 23:54:08.583851] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.093 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.093 [2024-07-24 23:54:08.648703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.093 [2024-07-24 23:54:08.738413] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:03.093 [2024-07-24 23:54:08.738475] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.093 [2024-07-24 23:54:08.738488] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.093 [2024-07-24 23:54:08.738500] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.093 [2024-07-24 23:54:08.738524] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.093 [2024-07-24 23:54:08.738557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.352 [2024-07-24 23:54:08.878642] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.352 [2024-07-24 23:54:08.894865] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.352 NULL1 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
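The `rpc_cmd` calls above (fused_ordering.sh @15–@20) correspond to the following RPC sequence against the running `nvmf_tgt`. All arguments are copied from the log; the `rpc.py` path assumes a stock SPDK checkout and should be adjusted to your tree:

```shell
#!/usr/bin/env bash
# Sketch of the target configuration issued by fused_ordering.sh: create the
# TCP transport, an allow-any-host subsystem, a listener on the namespaced
# target IP, and a 1000 MB null backing bdev exposed as namespace 1.
RPC=./scripts/rpc.py                # adjust to your SPDK tree
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512   # 1000 MB, 512-byte blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns "$NQN" NULL1
```

The `bdev_null_create NULL1 1000 512` call is what produces the "Namespace ID: 1 size: 1GB" line the fused_ordering tool prints once it attaches to the subsystem.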
00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.352 23:54:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.353 23:54:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:03.353 [2024-07-24 23:54:08.938543] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:03.353 [2024-07-24 23:54:08.938584] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737712 ] 00:14:03.353 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.920 Attached to nqn.2016-06.io.spdk:cnode1 00:14:03.920 Namespace ID: 1 size: 1GB 00:14:03.920 fused_ordering(0) 00:14:03.920 fused_ordering(1) 00:14:03.920 fused_ordering(2) 00:14:03.920 fused_ordering(3) 00:14:03.920 fused_ordering(4) 00:14:03.920 fused_ordering(5) 00:14:03.920 fused_ordering(6) 00:14:03.920 fused_ordering(7) 00:14:03.920 fused_ordering(8) 00:14:03.920 fused_ordering(9) 00:14:03.920 fused_ordering(10) 00:14:03.920 fused_ordering(11) 00:14:03.920 fused_ordering(12) 00:14:03.920 fused_ordering(13) 00:14:03.920 fused_ordering(14) 00:14:03.920 fused_ordering(15) 00:14:03.920 fused_ordering(16) 00:14:03.920 fused_ordering(17) 00:14:03.920 fused_ordering(18) 00:14:03.920 fused_ordering(19) 00:14:03.920 fused_ordering(20) 00:14:03.920 fused_ordering(21) 00:14:03.920 fused_ordering(22) 00:14:03.920 fused_ordering(23) 00:14:03.920 fused_ordering(24) 00:14:03.920 fused_ordering(25) 00:14:03.920 fused_ordering(26) 00:14:03.920 
fused_ordering(27) 00:14:03.920 fused_ordering(28) 00:14:03.920 fused_ordering(29) 00:14:03.920 fused_ordering(30) 00:14:03.920 fused_ordering(31) 00:14:03.920 fused_ordering(32) 00:14:03.920 fused_ordering(33) 00:14:03.920 fused_ordering(34) 00:14:03.920 fused_ordering(35) 00:14:03.920 fused_ordering(36) 00:14:03.920 fused_ordering(37) 00:14:03.920 fused_ordering(38) 00:14:03.920 fused_ordering(39) 00:14:03.920 fused_ordering(40) 00:14:03.920 fused_ordering(41) 00:14:03.920 fused_ordering(42) 00:14:03.920 fused_ordering(43) 00:14:03.920 fused_ordering(44) 00:14:03.920 fused_ordering(45) 00:14:03.920 fused_ordering(46) 00:14:03.920 fused_ordering(47) 00:14:03.920 fused_ordering(48) 00:14:03.920 fused_ordering(49) 00:14:03.920 fused_ordering(50) 00:14:03.920 fused_ordering(51) 00:14:03.920 fused_ordering(52) 00:14:03.920 fused_ordering(53) 00:14:03.920 fused_ordering(54) 00:14:03.920 fused_ordering(55) 00:14:03.920 fused_ordering(56) 00:14:03.920 fused_ordering(57) 00:14:03.920 fused_ordering(58) 00:14:03.920 fused_ordering(59) 00:14:03.920 fused_ordering(60) 00:14:03.920 fused_ordering(61) 00:14:03.920 fused_ordering(62) 00:14:03.920 fused_ordering(63) 00:14:03.920 fused_ordering(64) 00:14:03.920 fused_ordering(65) 00:14:03.920 fused_ordering(66) 00:14:03.920 fused_ordering(67) 00:14:03.920 fused_ordering(68) 00:14:03.920 fused_ordering(69) 00:14:03.920 fused_ordering(70) 00:14:03.920 fused_ordering(71) 00:14:03.920 fused_ordering(72) 00:14:03.920 fused_ordering(73) 00:14:03.920 fused_ordering(74) 00:14:03.920 fused_ordering(75) 00:14:03.920 fused_ordering(76) 00:14:03.920 fused_ordering(77) 00:14:03.920 fused_ordering(78) 00:14:03.920 fused_ordering(79) 00:14:03.920 fused_ordering(80) 00:14:03.920 fused_ordering(81) 00:14:03.920 fused_ordering(82) 00:14:03.920 fused_ordering(83) 00:14:03.920 fused_ordering(84) 00:14:03.920 fused_ordering(85) 00:14:03.920 fused_ordering(86) 00:14:03.920 fused_ordering(87) 00:14:03.920 fused_ordering(88) 00:14:03.920 
fused_ordering(89) 00:14:03.920 fused_ordering(90) 00:14:03.920 fused_ordering(91) 00:14:03.920 fused_ordering(92) 00:14:03.920 fused_ordering(93) 00:14:03.920 fused_ordering(94) 00:14:03.920 fused_ordering(95) 00:14:03.920 fused_ordering(96) 00:14:03.920 fused_ordering(97) 00:14:03.920 fused_ordering(98) 00:14:03.920 fused_ordering(99) 00:14:03.920 fused_ordering(100) 00:14:03.920 fused_ordering(101) 00:14:03.920 fused_ordering(102) 00:14:03.920 fused_ordering(103) 00:14:03.920 fused_ordering(104) 00:14:03.920 fused_ordering(105) 00:14:03.920 fused_ordering(106) 00:14:03.920 fused_ordering(107) 00:14:03.920 fused_ordering(108) 00:14:03.920 fused_ordering(109) 00:14:03.920 fused_ordering(110) 00:14:03.920 fused_ordering(111) 00:14:03.920 fused_ordering(112) 00:14:03.920 fused_ordering(113) 00:14:03.920 fused_ordering(114) 00:14:03.920 fused_ordering(115) 00:14:03.920 fused_ordering(116) 00:14:03.920 fused_ordering(117) 00:14:03.920 fused_ordering(118) 00:14:03.920 fused_ordering(119) 00:14:03.920 fused_ordering(120) 00:14:03.920 fused_ordering(121) 00:14:03.920 fused_ordering(122) 00:14:03.920 fused_ordering(123) 00:14:03.920 fused_ordering(124) 00:14:03.920 fused_ordering(125) 00:14:03.920 fused_ordering(126) 00:14:03.920 fused_ordering(127) 00:14:03.920 fused_ordering(128) 00:14:03.920 fused_ordering(129) 00:14:03.920 fused_ordering(130) 00:14:03.920 fused_ordering(131) 00:14:03.920 fused_ordering(132) 00:14:03.920 fused_ordering(133) 00:14:03.920 fused_ordering(134) 00:14:03.920 fused_ordering(135) 00:14:03.920 fused_ordering(136) 00:14:03.920 fused_ordering(137) 00:14:03.920 fused_ordering(138) 00:14:03.920 fused_ordering(139) 00:14:03.920 fused_ordering(140) 00:14:03.921 fused_ordering(141) 00:14:03.921 fused_ordering(142) 00:14:03.921 fused_ordering(143) 00:14:03.921 fused_ordering(144) 00:14:03.921 fused_ordering(145) 00:14:03.921 fused_ordering(146) 00:14:03.921 fused_ordering(147) 00:14:03.921 fused_ordering(148) 00:14:03.921 fused_ordering(149) 
00:14:03.921 fused_ordering(150) 00:14:03.921 fused_ordering(151) 00:14:03.921 fused_ordering(152) 00:14:03.921 fused_ordering(153) 00:14:03.921 fused_ordering(154) 00:14:03.921 fused_ordering(155) 00:14:03.921 fused_ordering(156) 00:14:03.921 fused_ordering(157) 00:14:03.921 fused_ordering(158) 00:14:03.921 fused_ordering(159) 00:14:03.921 fused_ordering(160) 00:14:03.921 fused_ordering(161) 00:14:03.921 fused_ordering(162) 00:14:03.921 fused_ordering(163) 00:14:03.921 fused_ordering(164) 00:14:03.921 fused_ordering(165) 00:14:03.921 fused_ordering(166) 00:14:03.921 fused_ordering(167) 00:14:03.921 fused_ordering(168) 00:14:03.921 fused_ordering(169) 00:14:03.921 fused_ordering(170) 00:14:03.921 fused_ordering(171) 00:14:03.921 fused_ordering(172) 00:14:03.921 fused_ordering(173) 00:14:03.921 fused_ordering(174) 00:14:03.921 fused_ordering(175) 00:14:03.921 fused_ordering(176) 00:14:03.921 fused_ordering(177) 00:14:03.921 fused_ordering(178) 00:14:03.921 fused_ordering(179) 00:14:03.921 fused_ordering(180) 00:14:03.921 fused_ordering(181) 00:14:03.921 fused_ordering(182) 00:14:03.921 fused_ordering(183) 00:14:03.921 fused_ordering(184) 00:14:03.921 fused_ordering(185) 00:14:03.921 fused_ordering(186) 00:14:03.921 fused_ordering(187) 00:14:03.921 fused_ordering(188) 00:14:03.921 fused_ordering(189) 00:14:03.921 fused_ordering(190) 00:14:03.921 fused_ordering(191) 00:14:03.921 fused_ordering(192) 00:14:03.921 fused_ordering(193) 00:14:03.921 fused_ordering(194) 00:14:03.921 fused_ordering(195) 00:14:03.921 fused_ordering(196) 00:14:03.921 fused_ordering(197) 00:14:03.921 fused_ordering(198) 00:14:03.921 fused_ordering(199) 00:14:03.921 fused_ordering(200) 00:14:03.921 fused_ordering(201) 00:14:03.921 fused_ordering(202) 00:14:03.921 fused_ordering(203) 00:14:03.921 fused_ordering(204) 00:14:03.921 fused_ordering(205) 00:14:04.488 fused_ordering(206) 00:14:04.488 fused_ordering(207) 00:14:04.488 fused_ordering(208) 00:14:04.488 fused_ordering(209) 00:14:04.488 
fused_ordering(210) 00:14:04.488 fused_ordering(211) 00:14:04.488 fused_ordering(212) 00:14:04.488 fused_ordering(213) 00:14:04.488 fused_ordering(214) 00:14:04.488 fused_ordering(215) 00:14:04.488 fused_ordering(216) 00:14:04.488 fused_ordering(217) 00:14:04.488 fused_ordering(218) 00:14:04.488 fused_ordering(219) 00:14:04.488 fused_ordering(220) 00:14:04.488 fused_ordering(221) 00:14:04.488 fused_ordering(222) 00:14:04.488 fused_ordering(223) 00:14:04.488 fused_ordering(224) 00:14:04.488 fused_ordering(225) 00:14:04.488 fused_ordering(226) 00:14:04.488 fused_ordering(227) 00:14:04.488 fused_ordering(228) 00:14:04.488 fused_ordering(229) 00:14:04.488 fused_ordering(230) 00:14:04.488 fused_ordering(231) 00:14:04.488 fused_ordering(232) 00:14:04.488 fused_ordering(233) 00:14:04.488 fused_ordering(234) 00:14:04.488 fused_ordering(235) 00:14:04.488 fused_ordering(236) 00:14:04.488 fused_ordering(237) 00:14:04.488 fused_ordering(238) 00:14:04.488 fused_ordering(239) 00:14:04.488 fused_ordering(240) 00:14:04.488 fused_ordering(241) 00:14:04.488 fused_ordering(242) 00:14:04.488 fused_ordering(243) 00:14:04.488 fused_ordering(244) 00:14:04.488 fused_ordering(245) 00:14:04.488 fused_ordering(246) 00:14:04.488 fused_ordering(247) 00:14:04.488 fused_ordering(248) 00:14:04.488 fused_ordering(249) 00:14:04.488 fused_ordering(250) 00:14:04.488 fused_ordering(251) 00:14:04.488 fused_ordering(252) 00:14:04.488 fused_ordering(253) 00:14:04.488 fused_ordering(254) 00:14:04.488 fused_ordering(255) 00:14:04.488 fused_ordering(256) 00:14:04.488 fused_ordering(257) 00:14:04.488 fused_ordering(258) 00:14:04.488 fused_ordering(259) 00:14:04.488 fused_ordering(260) 00:14:04.488 fused_ordering(261) 00:14:04.488 fused_ordering(262) 00:14:04.488 fused_ordering(263) 00:14:04.488 fused_ordering(264) 00:14:04.488 fused_ordering(265) 00:14:04.488 fused_ordering(266) 00:14:04.488 fused_ordering(267) 00:14:04.488 fused_ordering(268) 00:14:04.488 fused_ordering(269) 00:14:04.488 fused_ordering(270) 
00:14:04.488 fused_ordering(271) 00:14:04.488 fused_ordering(272) 00:14:04.488 fused_ordering(273) 00:14:04.488 fused_ordering(274) 00:14:04.488 fused_ordering(275) 00:14:04.488 fused_ordering(276) 00:14:04.488 fused_ordering(277) 00:14:04.488 fused_ordering(278) 00:14:04.488 fused_ordering(279) 00:14:04.488 fused_ordering(280) 00:14:04.488 fused_ordering(281) 00:14:04.488 fused_ordering(282) 00:14:04.488 fused_ordering(283) 00:14:04.488 fused_ordering(284) 00:14:04.488 fused_ordering(285) 00:14:04.488 fused_ordering(286) 00:14:04.488 fused_ordering(287) 00:14:04.488 fused_ordering(288) 00:14:04.488 fused_ordering(289) 00:14:04.488 fused_ordering(290) 00:14:04.488 fused_ordering(291) 00:14:04.488 fused_ordering(292) 00:14:04.488 fused_ordering(293) 00:14:04.488 fused_ordering(294) 00:14:04.488 fused_ordering(295) 00:14:04.488 fused_ordering(296) 00:14:04.488 fused_ordering(297) 00:14:04.488 fused_ordering(298) 00:14:04.488 fused_ordering(299) 00:14:04.488 fused_ordering(300) 00:14:04.488 fused_ordering(301) 00:14:04.488 fused_ordering(302) 00:14:04.488 fused_ordering(303) 00:14:04.488 fused_ordering(304) 00:14:04.488 fused_ordering(305) 00:14:04.488 fused_ordering(306) 00:14:04.488 fused_ordering(307) 00:14:04.488 fused_ordering(308) 00:14:04.488 fused_ordering(309) 00:14:04.488 fused_ordering(310) 00:14:04.488 fused_ordering(311) 00:14:04.488 fused_ordering(312) 00:14:04.488 fused_ordering(313) 00:14:04.488 fused_ordering(314) 00:14:04.488 fused_ordering(315) 00:14:04.488 fused_ordering(316) 00:14:04.488 fused_ordering(317) 00:14:04.488 fused_ordering(318) 00:14:04.488 fused_ordering(319) 00:14:04.488 fused_ordering(320) 00:14:04.488 fused_ordering(321) 00:14:04.488 fused_ordering(322) 00:14:04.488 fused_ordering(323) 00:14:04.488 fused_ordering(324) 00:14:04.488 fused_ordering(325) 00:14:04.488 fused_ordering(326) 00:14:04.488 fused_ordering(327) 00:14:04.488 fused_ordering(328) 00:14:04.488 fused_ordering(329) 00:14:04.488 fused_ordering(330) 00:14:04.488 
fused_ordering(331) 00:14:04.488 fused_ordering(332) 00:14:04.488 fused_ordering(333) 00:14:04.488 fused_ordering(334) 00:14:04.488 fused_ordering(335) 00:14:04.488 fused_ordering(336) 00:14:04.488 fused_ordering(337) 00:14:04.488 fused_ordering(338) 00:14:04.488 fused_ordering(339) 00:14:04.488 fused_ordering(340) 00:14:04.488 fused_ordering(341) 00:14:04.488 fused_ordering(342) 00:14:04.488 fused_ordering(343) 00:14:04.488 fused_ordering(344) 00:14:04.488 fused_ordering(345) 00:14:04.488 fused_ordering(346) 00:14:04.488 fused_ordering(347) 00:14:04.488 fused_ordering(348) 00:14:04.488 fused_ordering(349) 00:14:04.488 fused_ordering(350) 00:14:04.488 fused_ordering(351) 00:14:04.488 fused_ordering(352) 00:14:04.488 fused_ordering(353) 00:14:04.488 fused_ordering(354) 00:14:04.488 fused_ordering(355) 00:14:04.488 fused_ordering(356) 00:14:04.488 fused_ordering(357) 00:14:04.488 fused_ordering(358) 00:14:04.488 fused_ordering(359) 00:14:04.488 fused_ordering(360) 00:14:04.488 fused_ordering(361) 00:14:04.488 fused_ordering(362) 00:14:04.488 fused_ordering(363) 00:14:04.488 fused_ordering(364) 00:14:04.488 fused_ordering(365) 00:14:04.488 fused_ordering(366) 00:14:04.488 fused_ordering(367) 00:14:04.488 fused_ordering(368) 00:14:04.488 fused_ordering(369) 00:14:04.488 fused_ordering(370) 00:14:04.488 fused_ordering(371) 00:14:04.488 fused_ordering(372) 00:14:04.488 fused_ordering(373) 00:14:04.488 fused_ordering(374) 00:14:04.488 fused_ordering(375) 00:14:04.488 fused_ordering(376) 00:14:04.488 fused_ordering(377) 00:14:04.488 fused_ordering(378) 00:14:04.488 fused_ordering(379) 00:14:04.488 fused_ordering(380) 00:14:04.488 fused_ordering(381) 00:14:04.488 fused_ordering(382) 00:14:04.488 fused_ordering(383) 00:14:04.488 fused_ordering(384) 00:14:04.488 fused_ordering(385) 00:14:04.488 fused_ordering(386) 00:14:04.488 fused_ordering(387) 00:14:04.488 fused_ordering(388) 00:14:04.488 fused_ordering(389) 00:14:04.488 fused_ordering(390) 00:14:04.488 fused_ordering(391) 
00:14:04.488 fused_ordering(392) 00:14:04.488 fused_ordering(393) 00:14:04.488 fused_ordering(394) 00:14:04.488 fused_ordering(395) 00:14:04.488 fused_ordering(396) 00:14:04.488 fused_ordering(397) 00:14:04.488 fused_ordering(398) 00:14:04.488 fused_ordering(399) 00:14:04.488 fused_ordering(400) 00:14:04.488 fused_ordering(401) 00:14:04.488 fused_ordering(402) 00:14:04.488 fused_ordering(403) 00:14:04.488 fused_ordering(404) 00:14:04.488 fused_ordering(405) 00:14:04.489 fused_ordering(406) 00:14:04.489 fused_ordering(407) 00:14:04.489 fused_ordering(408) 00:14:04.489 fused_ordering(409) 00:14:04.489 fused_ordering(410) 00:14:05.085 fused_ordering(411) 00:14:05.085 fused_ordering(412) 00:14:05.085 fused_ordering(413) 00:14:05.085 fused_ordering(414) 00:14:05.085 fused_ordering(415) 00:14:05.085 fused_ordering(416) 00:14:05.085 fused_ordering(417) 00:14:05.085 fused_ordering(418) 00:14:05.085 fused_ordering(419) 00:14:05.085 fused_ordering(420) 00:14:05.085 fused_ordering(421) 00:14:05.085 fused_ordering(422) 00:14:05.085 fused_ordering(423) 00:14:05.085 fused_ordering(424) 00:14:05.085 fused_ordering(425) 00:14:05.085 fused_ordering(426) 00:14:05.085 fused_ordering(427) 00:14:05.085 fused_ordering(428) 00:14:05.085 fused_ordering(429) 00:14:05.085 fused_ordering(430) 00:14:05.085 fused_ordering(431) 00:14:05.085 fused_ordering(432) 00:14:05.085 fused_ordering(433) 00:14:05.085 fused_ordering(434) 00:14:05.085 fused_ordering(435) 00:14:05.085 fused_ordering(436) 00:14:05.085 fused_ordering(437) 00:14:05.085 fused_ordering(438) 00:14:05.085 fused_ordering(439) 00:14:05.085 fused_ordering(440) 00:14:05.086 fused_ordering(441) 00:14:05.086 fused_ordering(442) 00:14:05.086 fused_ordering(443) 00:14:05.086 fused_ordering(444) 00:14:05.086 fused_ordering(445) 00:14:05.086 fused_ordering(446) 00:14:05.086 fused_ordering(447) 00:14:05.086 fused_ordering(448) 00:14:05.086 fused_ordering(449) 00:14:05.086 fused_ordering(450) 00:14:05.086 fused_ordering(451) 00:14:05.086 
fused_ordering(452) 00:14:05.086 fused_ordering(453) 00:14:05.086 fused_ordering(454) 00:14:05.086 fused_ordering(455) 00:14:05.086 fused_ordering(456) 00:14:05.086 fused_ordering(457) 00:14:05.086 fused_ordering(458) 00:14:05.086 fused_ordering(459) 00:14:05.086 fused_ordering(460) 00:14:05.086 fused_ordering(461) 00:14:05.086 fused_ordering(462) 00:14:05.086 fused_ordering(463) 00:14:05.086 fused_ordering(464) 00:14:05.086 fused_ordering(465) 00:14:05.086 fused_ordering(466) 00:14:05.086 fused_ordering(467) 00:14:05.086 fused_ordering(468) 00:14:05.086 fused_ordering(469) 00:14:05.086 fused_ordering(470) 00:14:05.086 fused_ordering(471) 00:14:05.086 fused_ordering(472) 00:14:05.086 fused_ordering(473) 00:14:05.086 fused_ordering(474) 00:14:05.086 fused_ordering(475) 00:14:05.086 fused_ordering(476) 00:14:05.086 fused_ordering(477) 00:14:05.086 fused_ordering(478) 00:14:05.086 fused_ordering(479) 00:14:05.086 fused_ordering(480) 00:14:05.086 fused_ordering(481) 00:14:05.086 fused_ordering(482) 00:14:05.086 fused_ordering(483) 00:14:05.086 fused_ordering(484) 00:14:05.086 fused_ordering(485) 00:14:05.086 fused_ordering(486) 00:14:05.086 fused_ordering(487) 00:14:05.086 fused_ordering(488) 00:14:05.086 fused_ordering(489) 00:14:05.086 fused_ordering(490) 00:14:05.086 fused_ordering(491) 00:14:05.086 fused_ordering(492) 00:14:05.086 fused_ordering(493) 00:14:05.086 fused_ordering(494) 00:14:05.086 fused_ordering(495) 00:14:05.086 fused_ordering(496) 00:14:05.086 fused_ordering(497) 00:14:05.086 fused_ordering(498) 00:14:05.086 fused_ordering(499) 00:14:05.086 fused_ordering(500) 00:14:05.086 fused_ordering(501) 00:14:05.086 fused_ordering(502) 00:14:05.086 fused_ordering(503) 00:14:05.086 fused_ordering(504) 00:14:05.086 fused_ordering(505) 00:14:05.086 fused_ordering(506) 00:14:05.086 fused_ordering(507) 00:14:05.086 fused_ordering(508) 00:14:05.086 fused_ordering(509) 00:14:05.086 fused_ordering(510) 00:14:05.086 fused_ordering(511) 00:14:05.086 fused_ordering(512) 
00:14:05.086 fused_ordering(513) 00:14:05.086 fused_ordering(514) 00:14:05.086 fused_ordering(515) 00:14:05.086 fused_ordering(516) 00:14:05.086 fused_ordering(517) 00:14:05.086 fused_ordering(518) 00:14:05.086 fused_ordering(519) 00:14:05.086 fused_ordering(520) 00:14:05.086 fused_ordering(521) 00:14:05.086 fused_ordering(522) 00:14:05.086 fused_ordering(523) 00:14:05.086 fused_ordering(524) 00:14:05.086 fused_ordering(525) 00:14:05.086 fused_ordering(526) 00:14:05.086 fused_ordering(527) 00:14:05.086 fused_ordering(528) 00:14:05.086 fused_ordering(529) 00:14:05.086 fused_ordering(530) 00:14:05.086 fused_ordering(531) 00:14:05.086 fused_ordering(532) 00:14:05.086 fused_ordering(533) 00:14:05.086 fused_ordering(534) 00:14:05.086 fused_ordering(535) 00:14:05.086 fused_ordering(536) 00:14:05.086 fused_ordering(537) 00:14:05.086 fused_ordering(538) 00:14:05.086 fused_ordering(539) 00:14:05.086 fused_ordering(540) 00:14:05.086 fused_ordering(541) 00:14:05.086 fused_ordering(542) 00:14:05.086 fused_ordering(543) 00:14:05.086 fused_ordering(544) 00:14:05.086 fused_ordering(545) 00:14:05.086 fused_ordering(546) 00:14:05.086 fused_ordering(547) 00:14:05.086 fused_ordering(548) 00:14:05.086 fused_ordering(549) 00:14:05.086 fused_ordering(550) 00:14:05.086 fused_ordering(551) 00:14:05.086 fused_ordering(552) 00:14:05.086 fused_ordering(553) 00:14:05.086 fused_ordering(554) 00:14:05.086 fused_ordering(555) 00:14:05.086 fused_ordering(556) 00:14:05.086 fused_ordering(557) 00:14:05.086 fused_ordering(558) 00:14:05.086 fused_ordering(559) 00:14:05.086 fused_ordering(560) 00:14:05.086 fused_ordering(561) 00:14:05.086 fused_ordering(562) 00:14:05.086 fused_ordering(563) 00:14:05.086 fused_ordering(564) 00:14:05.086 fused_ordering(565) 00:14:05.086 fused_ordering(566) 00:14:05.086 fused_ordering(567) 00:14:05.086 fused_ordering(568) 00:14:05.086 fused_ordering(569) 00:14:05.086 fused_ordering(570) 00:14:05.086 fused_ordering(571) 00:14:05.086 fused_ordering(572) 00:14:05.086 
fused_ordering(573) 00:14:05.086 [... fused_ordering counter lines 573-1023 elided; timestamps advance 00:14:05.086 - 00:14:06.588 ...] fused_ordering(1023) 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:06.588 rmmod nvme_tcp 00:14:06.588 rmmod nvme_fabrics 00:14:06.588 rmmod nvme_keyring 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:06.588 23:54:12 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 737570 ']' 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 737570 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 737570 ']' 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 737570 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 737570 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 737570' 00:14:06.588 killing process with pid 737570 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 737570 00:14:06.588 23:54:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 737570 00:14:06.846 23:54:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:06.846 23:54:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:06.846 23:54:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:06.846 23:54:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.846 23:54:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:06.847 23:54:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.847 23:54:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.847 23:54:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.377 23:54:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:09.377 00:14:09.377 real 0m8.090s 00:14:09.377 user 0m5.675s 00:14:09.377 sys 0m3.876s 00:14:09.377 23:54:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:09.377 23:54:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.377 ************************************ 00:14:09.377 END TEST nvmf_fused_ordering 00:14:09.377 ************************************ 00:14:09.377 23:54:14 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:09.377 23:54:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:09.377 23:54:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:09.377 23:54:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:09.377 ************************************ 00:14:09.377 START TEST nvmf_delete_subsystem 00:14:09.377 ************************************ 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:09.377 * Looking for test storage... 
00:14:09.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.377 23:54:14 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:09.377 23:54:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:11.280 23:54:16 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:11.280 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:11.280 Found 
0000:09:00.1 (0x8086 - 0x159b) 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:11.280 Found net devices under 0000:09:00.0: cvl_0_0 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.280 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:11.281 Found net devices under 0000:09:00.1: cvl_0_1 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:11.281 
23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:11.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:11.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:14:11.281 00:14:11.281 --- 10.0.0.2 ping statistics --- 00:14:11.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.281 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:11.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:11.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:14:11.281 00:14:11.281 --- 10.0.0.1 ping statistics --- 00:14:11.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.281 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:11.281 
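The namespace plumbing traced in nvmf/common.sh@244-268 above can be sketched as a standalone script. The interface names (cvl_0_0, cvl_0_1), addresses, namespace name, and port are taken from this log; the `run`/`DRY_RUN` wrapper is an illustrative addition, not part of nvmf/common.sh, and real execution requires root.

```shell
# Sketch of nvmf_tcp_init's data path: move the target NIC into a network
# namespace, address both ends, open TCP/4420, and verify with ping.
# DRY_RUN=1 (the default here) only prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk          # namespace name from the log
TGT=cvl_0_0                 # target-side interface
INI=cvl_0_1                 # initiator-side interface

run ip -4 addr flush "$TGT"
run ip -4 addr flush "$INI"
run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"                      # target NIC into the netns
run ip addr add 10.0.0.1/24 dev "$INI"                  # initiator keeps 10.0.0.1
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                  # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1              # target -> initiator
```

The two pings at the end correspond to the connectivity checks whose output appears above; only after both succeed does the harness return 0 from nvmf_tcp_init.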
23:54:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=740071 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 740071 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 740071 ']' 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:11.281 23:54:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:11.281 [2024-07-24 23:54:16.802838] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:11.281 [2024-07-24 23:54:16.802918] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.281 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.281 [2024-07-24 23:54:16.873285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:11.281 [2024-07-24 23:54:16.968919] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
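nvmfappstart above launches nvmf_tgt inside the namespace and then calls waitforlisten, which blocks until the process is up and listening on /var/tmp/spdk.sock. A minimal version of that poll might look like the following; `wait_for_rpc_sock` is a hypothetical name (the real helper is `waitforlisten` in autotest_common.sh, which also issues a probe RPC), and the retry count is illustrative.

```shell
# Poll until the target PID is alive and its RPC Unix socket exists.
# Returns 0 on success, 1 if the process dies or the retries run out.
wait_for_rpc_sock() {
    pid=$1
    sock=${2:-/var/tmp/spdk.sock}
    retries=${3:-100}
    i=0
    while [ "$i" -lt "$retries" ]; do
        kill -0 "$pid" 2>/dev/null || return 1    # target exited early
        [ -S "$sock" ] && return 0                # socket is up: ready for rpc_cmd
        sleep 0.1
        i=$((i + 1))
    done
    return 1                                      # timed out
}
```

Polling the PID as well as the socket is what lets the harness fail fast when nvmf_tgt crashes during startup instead of waiting out the full timeout.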
00:14:11.281 [2024-07-24 23:54:16.968980] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.281 [2024-07-24 23:54:16.968995] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.281 [2024-07-24 23:54:16.969009] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.281 [2024-07-24 23:54:16.969021] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.281 [2024-07-24 23:54:16.969116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.281 [2024-07-24 23:54:16.969122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.539 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:11.539 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:11.540 [2024-07-24 23:54:17.125153] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:11.540 [2024-07-24 23:54:17.141456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:11.540 NULL1 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:11.540 Delay0 00:14:11.540 23:54:17 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=740207 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:11.540 23:54:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:11.540 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.540 [2024-07-24 23:54:17.216117] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
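Taken together, the rpc_cmd calls traced between delete_subsystem.sh@15 and @24 build the target configuration under test. Written out as direct scripts/rpc.py invocations (arguments copied verbatim from the trace; the rpc.py path and socket are the defaults for this run, and the flag annotations in the comments are an interpretation, not from the log), the sequence is:

```shell
rpc="scripts/rpc.py -s /var/tmp/spdk.sock"

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                          # allow any host, max 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                         # null bdev, 512 B blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000             # ~1 s latencies layered on NULL1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # expose Delay0 through cnode1
```

The delay bdev is the point of the setup: it keeps I/O in flight long enough for the later nvmf_delete_subsystem call to abort it, which is why the perf output that follows reports completions with errors rather than clean results.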
00:14:14.068 23:54:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.068 23:54:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.068 23:54:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 starting I/O failed: -6 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 starting I/O failed: -6 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 starting I/O failed: -6 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 starting I/O failed: -6 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 starting I/O failed: -6 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 starting I/O failed: -6 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 starting I/O failed: -6 
00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 starting I/O failed: -6 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 starting I/O failed: -6 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 starting I/O failed: -6 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 [2024-07-24 23:54:19.347028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa70c000c00 is same with the state(5) to be set 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 
00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 starting I/O failed: -6 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 starting I/O failed: -6 
00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 starting I/O failed: -6 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Read completed with error (sct=0, sc=8) 00:14:14.068 Write completed with error (sct=0, sc=8) 00:14:14.068 starting I/O failed: -6 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 starting I/O failed: -6 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 starting I/O failed: -6 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 starting I/O failed: -6 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 starting I/O failed: -6 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 starting I/O failed: -6 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 starting I/O failed: -6 
00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 starting I/O failed: -6 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 starting I/O failed: -6 00:14:14.069 [2024-07-24 23:54:19.347953] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1950180 is same with the state(5) to be set 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, 
sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.069 Write completed with error (sct=0, sc=8) 00:14:14.069 Read completed with error (sct=0, sc=8) 00:14:14.636 [2024-07-24 23:54:20.316337] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19538b0 is same with the state(5) to be set 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Write completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 
00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Write completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 [2024-07-24 23:54:20.350001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1950360 is same with the state(5) to be set 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Write completed with error (sct=0, sc=8) 00:14:14.636 Write completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Write completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Write completed with error (sct=0, sc=8) 00:14:14.636 Write 
completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Write completed with error (sct=0, sc=8) 00:14:14.636 Write completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Write completed with error (sct=0, sc=8) 00:14:14.636 [2024-07-24 23:54:20.350210] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1950aa0 is same with the state(5) to be set 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Write completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Write completed with error (sct=0, sc=8) 00:14:14.636 Write completed with error (sct=0, sc=8) 00:14:14.636 Write completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 [2024-07-24 23:54:20.350968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa70c00c2f0 is same with the state(5) to be set 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read 
completed with error (sct=0, sc=8) 00:14:14.636 Write completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Write completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 Read completed with error (sct=0, sc=8) 00:14:14.636 [2024-07-24 23:54:20.351848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa70c00bfe0 is same with the state(5) to be set 00:14:14.636 Initializing NVMe Controllers 00:14:14.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:14.637 Controller IO queue size 128, less than required. 00:14:14.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:14.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:14.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:14.637 Initialization complete. Launching workers. 
00:14:14.637 ======================================================== 00:14:14.637 Latency(us) 00:14:14.637 Device Information : IOPS MiB/s Average min max 00:14:14.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.17 0.08 897362.77 383.83 1014322.77 00:14:14.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.26 0.08 927363.11 528.59 1014125.24 00:14:14.637 ======================================================== 00:14:14.637 Total : 326.43 0.16 911815.82 383.83 1014322.77 00:14:14.637 00:14:14.895 [2024-07-24 23:54:20.352404] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19538b0 (9): Bad file descriptor 00:14:14.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:14.895 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.895 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:14.895 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 740207 00:14:14.895 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:15.153 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:15.153 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 740207 00:14:15.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (740207) - No such process 00:14:15.153 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 740207 00:14:15.153 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:15.153 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 740207 00:14:15.153 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@636 -- # local arg=wait 00:14:15.153 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:15.153 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:15.153 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:15.153 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 740207 00:14:15.153 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:15.153 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:15.153 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:15.153 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:15.153 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:15.153 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.153 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:15.411 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.411 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.411 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.411 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:15.411 [2024-07-24 23:54:20.874160] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.411 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:14:15.411 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.411 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.411 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:15.411 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.411 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=740614 00:14:15.411 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:15.411 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:15.411 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 740614 00:14:15.411 23:54:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:15.411 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.411 [2024-07-24 23:54:20.927242] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
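The sleep 0.5 iterations that follow come from the bounded poll at delete_subsystem.sh@56-60: the script waits for spdk_nvme_perf to finish on its own, counting half-second ticks with kill -0 and failing if the count passes the limit. A minimal reconstruction of that loop (the function name `wait_for_perf_exit` and its argument handling are illustrative, not the script's own):

```shell
# Wait for a PID to exit, polling with kill -0 every 0.5 s.
# Returns 0 once the process is gone, 1 if it outlives max ticks.
wait_for_perf_exit() {
    pid=$1
    max=${2:-20}     # delete_subsystem.sh allows ~20 ticks (about 10 s)
    delay=0
    while kill -0 "$pid" 2>/dev/null; do
        if [ "$delay" -gt "$max" ]; then
            return 1                     # perf is stuck: fail the test
        fi
        delay=$((delay + 1))
        sleep 0.5
    done
    return 0                             # perf exited on its own
}
```

kill -0 sends no signal; it only checks that the PID still exists, which is also how the earlier `kill -0 740207` probe produced the "No such process" message once perf had exited.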
00:14:15.976 23:54:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:15.976 23:54:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 740614 00:14:15.976 23:54:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:16.233 23:54:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:16.233 23:54:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 740614 00:14:16.233 23:54:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:16.797 23:54:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:16.797 23:54:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 740614 00:14:16.797 23:54:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:17.361 23:54:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:17.361 23:54:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 740614 00:14:17.361 23:54:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:17.925 23:54:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:17.925 23:54:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 740614 00:14:17.925 23:54:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:18.489 23:54:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:18.489 23:54:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 740614 00:14:18.489 23:54:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:18.489 Initializing NVMe Controllers 00:14:18.489 Attached to NVMe over Fabrics controller 
at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:18.489 Controller IO queue size 128, less than required. 00:14:18.489 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:18.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:18.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:18.489 Initialization complete. Launching workers. 00:14:18.489 ======================================================== 00:14:18.489 Latency(us) 00:14:18.489 Device Information : IOPS MiB/s Average min max 00:14:18.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005043.60 1000243.11 1042433.15 00:14:18.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004307.54 1000213.02 1012903.07 00:14:18.489 ======================================================== 00:14:18.489 Total : 256.00 0.12 1004675.57 1000213.02 1042433.15 00:14:18.489 00:14:18.746 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:18.746 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 740614 00:14:18.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (740614) - No such process 00:14:18.746 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 740614 00:14:18.746 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:18.746 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:18.746 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:18.746 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:18.746 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' 
tcp == tcp ']' 00:14:18.746 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:18.746 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.746 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:18.746 rmmod nvme_tcp 00:14:18.746 rmmod nvme_fabrics 00:14:18.746 rmmod nvme_keyring 00:14:18.746 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:19.003 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:19.003 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:19.003 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 740071 ']' 00:14:19.003 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 740071 00:14:19.003 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 740071 ']' 00:14:19.003 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 740071 00:14:19.003 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:14:19.003 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:19.003 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 740071 00:14:19.003 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:19.003 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:19.003 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 740071' 00:14:19.003 killing process with pid 740071 00:14:19.003 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 740071 00:14:19.003 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@970 -- # wait 740071 00:14:19.261 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:19.261 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:19.261 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:19.261 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.261 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:19.261 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.261 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.261 23:54:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.161 23:54:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:21.161 00:14:21.161 real 0m12.236s 00:14:21.161 user 0m27.743s 00:14:21.161 sys 0m2.993s 00:14:21.161 23:54:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:21.161 23:54:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:21.161 ************************************ 00:14:21.161 END TEST nvmf_delete_subsystem 00:14:21.161 ************************************ 00:14:21.161 23:54:26 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:21.161 23:54:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:21.161 23:54:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:21.161 23:54:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:21.161 ************************************ 00:14:21.161 START TEST nvmf_ns_masking 00:14:21.161 ************************************ 00:14:21.161 23:54:26 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:21.161 * Looking for test storage... 00:14:21.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.161 23:54:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.161 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:21.161 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.161 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.161 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.161 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.161 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.161 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.161 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.161 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.419 23:54:26 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:21.419 23:54:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:21.420 23:54:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:21.420 23:54:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=3e6117e5-8697-4f05-b512-28a8358011d2 00:14:21.420 23:54:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:21.420 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:21.420 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.420 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:21.420 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:21.420 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:21.420 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.420 23:54:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.420 23:54:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.420 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:21.420 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:21.420 23:54:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:21.420 23:54:26 nvmf_tcp.nvmf_ns_masking 
-- common/autotest_common.sh@10 -- # set +x 00:14:23.317 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:23.317 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:23.317 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:23.317 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:23.317 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:23.317 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.318 
23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:23.318 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:23.318 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:23.318 Found net devices under 0000:09:00.0: cvl_0_0 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:23.318 Found net devices under 0000:09:00.1: cvl_0_1 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:23.318 23:54:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:23.318 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:23.318 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:23.318 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:23.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:23.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:14:23.576 00:14:23.576 --- 10.0.0.2 ping statistics --- 00:14:23.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.576 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:14:23.576 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:23.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:23.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:14:23.576 00:14:23.576 --- 10.0.0.1 ping statistics --- 00:14:23.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.576 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:14:23.576 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.576 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:23.576 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:23.576 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.576 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:23.576 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:23.576 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.576 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:23.576 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:23.577 23:54:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:23.577 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.577 23:54:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:23.577 23:54:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 
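With connectivity verified by the pings above, the test next launches `nvmf_tgt` inside the namespace and blocks in `waitforlisten` until the target's RPC socket (`/var/tmp/spdk.sock`) is accepting connections. That wait reduces to a bounded polling loop; a simplified file-based stand-in (the temp path, the background subshell playing the target, and the timings are all illustrative assumptions):

```shell
#!/usr/bin/env bash
# Stand-in for waitforlisten: poll until the server's socket path exists.
# The background subshell plays the role of nvmf_tgt creating its RPC
# socket; a plain file is used here instead of a real UNIX socket.
sock="$(mktemp -u)"          # hypothetical socket path
( sleep 1; : > "$sock" ) &   # stand-in for the target coming up

up=0
for _ in {1..100}; do        # ~10 s budget at 0.1 s per probe
    if [ -e "$sock" ]; then up=1; break; fi
    sleep 0.1
done
[ "$up" -eq 1 ] && echo "listener ready at $sock"
```

The real helper additionally issues an RPC over the socket to confirm the app is responsive, not just that the path exists.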
00:14:23.577 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=742954 00:14:23.577 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:23.577 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 742954 00:14:23.577 23:54:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 742954 ']' 00:14:23.577 23:54:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.577 23:54:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:23.577 23:54:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.577 23:54:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:23.577 23:54:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:23.577 [2024-07-24 23:54:29.116413] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:23.577 [2024-07-24 23:54:29.116496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.577 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.577 [2024-07-24 23:54:29.179532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.577 [2024-07-24 23:54:29.272697] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.577 [2024-07-24 23:54:29.272746] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:23.577 [2024-07-24 23:54:29.272775] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.577 [2024-07-24 23:54:29.272786] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.577 [2024-07-24 23:54:29.272795] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.577 [2024-07-24 23:54:29.272888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.577 [2024-07-24 23:54:29.272946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.577 [2024-07-24 23:54:29.273013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.577 [2024-07-24 23:54:29.273015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.836 23:54:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:23.836 23:54:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:23.836 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:23.836 23:54:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:23.836 23:54:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:23.836 23:54:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.836 23:54:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:24.094 [2024-07-24 23:54:29.634453] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.094 23:54:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:24.094 23:54:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:24.094 23:54:29 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:24.352 Malloc1 00:14:24.352 23:54:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:24.610 Malloc2 00:14:24.610 23:54:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:24.868 23:54:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:25.131 23:54:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.434 [2024-07-24 23:54:30.922609] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.434 23:54:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:25.434 23:54:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3e6117e5-8697-4f05-b512-28a8358011d2 -a 10.0.0.2 -s 4420 -i 4 00:14:25.696 23:54:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:25.696 23:54:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:25.696 23:54:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:25.696 23:54:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:25.696 23:54:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 
-- # sleep 2 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:27.595 [ 0]:0x1 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e794c0da778342f9b9ed560069dcfba4 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e794c0da778342f9b9ed560069dcfba4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.595 23:54:33 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:28.158 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:28.158 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:28.158 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:28.158 [ 0]:0x1 00:14:28.158 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.158 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:28.158 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e794c0da778342f9b9ed560069dcfba4 00:14:28.158 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e794c0da778342f9b9ed560069dcfba4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.158 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:28.158 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:28.158 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:28.158 [ 1]:0x2 00:14:28.158 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:28.158 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:28.158 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=31cfb0311ad74e8e984656e29edc7063 00:14:28.158 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31cfb0311ad74e8e984656e29edc7063 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.158 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:28.158 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:14:28.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.158 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.415 23:54:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:28.673 23:54:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:28.673 23:54:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3e6117e5-8697-4f05-b512-28a8358011d2 -a 10.0.0.2 -s 4420 -i 4 00:14:28.931 23:54:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:28.931 23:54:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:28.931 23:54:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:28.931 23:54:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:28.931 23:54:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:28.931 23:54:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( 
nvme_devices == nvme_device_counter )) 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:30.826 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:31.083 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:31.083 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:31.083 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:31.083 23:54:36 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.083 23:54:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:31.083 23:54:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:31.083 23:54:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:31.083 23:54:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:31.083 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:31.083 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:31.083 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:31.083 [ 0]:0x2 00:14:31.083 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:31.083 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:31.083 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=31cfb0311ad74e8e984656e29edc7063 00:14:31.083 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31cfb0311ad74e8e984656e29edc7063 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.083 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:31.340 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:31.340 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:31.340 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:31.340 [ 0]:0x1 00:14:31.340 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 
-n 0x1 -o json 00:14:31.340 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:31.340 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e794c0da778342f9b9ed560069dcfba4 00:14:31.340 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e794c0da778342f9b9ed560069dcfba4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.340 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:31.340 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:31.340 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:31.340 [ 1]:0x2 00:14:31.340 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:31.340 23:54:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:31.340 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=31cfb0311ad74e8e984656e29edc7063 00:14:31.340 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31cfb0311ad74e8e984656e29edc7063 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.340 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:31.598 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:31.598 23:54:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:31.598 23:54:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:31.598 23:54:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:31.598 23:54:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:14:31.598 23:54:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:31.598 23:54:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:31.598 23:54:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:31.598 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:31.598 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:31.598 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:31.598 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:31.855 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:31.855 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.855 23:54:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:31.855 23:54:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:31.855 23:54:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:31.855 23:54:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:31.855 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:31.855 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:31.855 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:31.855 [ 0]:0x2 00:14:31.855 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:31.855 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:31.855 23:54:37 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@40 -- # nguid=31cfb0311ad74e8e984656e29edc7063 00:14:31.855 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31cfb0311ad74e8e984656e29edc7063 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.855 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:31.855 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:31.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.855 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:32.112 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:32.112 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3e6117e5-8697-4f05-b512-28a8358011d2 -a 10.0.0.2 -s 4420 -i 4 00:14:32.112 23:54:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:32.112 23:54:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:32.112 23:54:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.112 23:54:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:32.112 23:54:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:32.112 23:54:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:34.636 [ 0]:0x1 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e794c0da778342f9b9ed560069dcfba4 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e794c0da778342f9b9ed560069dcfba4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.636 23:54:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:34.636 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:34.636 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:34.636 [ 1]:0x2 00:14:34.636 23:54:40 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:34.636 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:34.636 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=31cfb0311ad74e8e984656e29edc7063 00:14:34.636 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31cfb0311ad74e8e984656e29edc7063 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.637 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:34.894 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:34.894 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:34.894 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:34.894 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:34.894 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:34.895 23:54:40 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:34.895 [ 0]:0x2 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=31cfb0311ad74e8e984656e29edc7063 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31cfb0311ad74e8e984656e29edc7063 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host 
nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:34.895 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:35.153 [2024-07-24 23:54:40.725573] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:35.153 request: 00:14:35.153 { 00:14:35.153 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.153 "nsid": 2, 00:14:35.153 "host": "nqn.2016-06.io.spdk:host1", 00:14:35.153 "method": "nvmf_ns_remove_host", 00:14:35.153 "req_id": 1 00:14:35.153 } 00:14:35.153 Got JSON-RPC error response 00:14:35.153 response: 00:14:35.153 { 00:14:35.153 "code": -32602, 00:14:35.153 "message": "Invalid parameters" 00:14:35.153 } 00:14:35.153 
23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # es=1 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:35.153 [ 0]:0x2 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:35.153 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:35.411 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=31cfb0311ad74e8e984656e29edc7063 00:14:35.411 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31cfb0311ad74e8e984656e29edc7063 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.411 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:35.411 23:54:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:35.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.411 23:54:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 
-- # sync 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:35.669 rmmod nvme_tcp 00:14:35.669 rmmod nvme_fabrics 00:14:35.669 rmmod nvme_keyring 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 742954 ']' 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 742954 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 742954 ']' 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 742954 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 742954 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 742954' 00:14:35.669 killing process with pid 742954 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 742954 00:14:35.669 23:54:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 
-- # wait 742954 00:14:35.927 23:54:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:35.927 23:54:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:35.927 23:54:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:35.927 23:54:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:35.927 23:54:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:35.927 23:54:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.927 23:54:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.927 23:54:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.461 23:54:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:38.461 00:14:38.461 real 0m16.855s 00:14:38.461 user 0m52.597s 00:14:38.461 sys 0m3.845s 00:14:38.461 23:54:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:38.461 23:54:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:38.461 ************************************ 00:14:38.461 END TEST nvmf_ns_masking 00:14:38.461 ************************************ 00:14:38.461 23:54:43 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:38.461 23:54:43 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:38.461 23:54:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:38.462 23:54:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:38.462 23:54:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:38.462 ************************************ 00:14:38.462 START TEST nvmf_nvme_cli 00:14:38.462 ************************************ 00:14:38.462 
23:54:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:38.462 * Looking for test storage... 00:14:38.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.462 
23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:38.462 23:54:43 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:38.462 23:54:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:40.364 23:54:45 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:40.364 23:54:45 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.364 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:40.365 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:40.365 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.365 23:54:45 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:40.365 Found net devices under 0000:09:00.0: cvl_0_0 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:40.365 Found net devices under 0000:09:00.1: cvl_0_1 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:40.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:40.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:14:40.365 00:14:40.365 --- 10.0.0.2 ping statistics --- 00:14:40.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.365 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:40.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:40.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:14:40.365 00:14:40.365 --- 10.0.0.1 ping statistics --- 00:14:40.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.365 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=746503 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 746503 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 746503 ']' 
00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:40.365 23:54:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.365 [2024-07-24 23:54:45.926157] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:40.365 [2024-07-24 23:54:45.926243] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.365 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.365 [2024-07-24 23:54:45.998943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:40.624 [2024-07-24 23:54:46.097660] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.624 [2024-07-24 23:54:46.097715] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.624 [2024-07-24 23:54:46.097731] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.624 [2024-07-24 23:54:46.097745] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.624 [2024-07-24 23:54:46.097757] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:40.624 [2024-07-24 23:54:46.097847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.624 [2024-07-24 23:54:46.097920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.624 [2024-07-24 23:54:46.097941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:40.624 [2024-07-24 23:54:46.097944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.624 [2024-07-24 23:54:46.250753] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.624 Malloc0 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.624 
23:54:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.624 Malloc1 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 
00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.624 [2024-07-24 23:54:46.334304] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.624 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.881 23:54:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.881 23:54:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:14:40.881 00:14:40.881 Discovery Log Number of Records 2, Generation counter 2 00:14:40.881 =====Discovery Log Entry 0====== 00:14:40.881 trtype: tcp 00:14:40.881 adrfam: ipv4 00:14:40.881 subtype: current discovery subsystem 00:14:40.881 treq: not required 00:14:40.881 portid: 0 00:14:40.881 trsvcid: 4420 00:14:40.881 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:40.881 traddr: 10.0.0.2 00:14:40.881 eflags: explicit discovery connections, duplicate discovery information 00:14:40.881 sectype: none 00:14:40.881 =====Discovery Log Entry 1====== 00:14:40.881 trtype: tcp 00:14:40.881 adrfam: ipv4 00:14:40.881 subtype: nvme subsystem 00:14:40.881 treq: not required 00:14:40.881 portid: 0 00:14:40.881 trsvcid: 4420 00:14:40.881 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:40.881 traddr: 10.0.0.2 00:14:40.881 eflags: none 00:14:40.881 sectype: none 00:14:40.881 23:54:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:40.881 23:54:46 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:40.881 23:54:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:40.881 23:54:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:40.881 23:54:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:40.881 23:54:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:40.881 23:54:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:40.881 23:54:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:40.881 23:54:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:40.881 23:54:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:40.882 23:54:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:41.447 23:54:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:41.447 23:54:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:14:41.447 23:54:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:41.447 23:54:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:41.447 23:54:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:41.447 23:54:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 
00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:43.972 /dev/nvme0n1 ]] 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:43.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:43.972 rmmod nvme_tcp 00:14:43.972 rmmod nvme_fabrics 00:14:43.972 rmmod nvme_keyring 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 746503 ']' 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 746503 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@946 -- # '[' -z 746503 ']' 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 746503 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 746503 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 746503' 00:14:43.972 killing process with pid 746503 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 746503 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 746503 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.972 23:54:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.505 23:54:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:46.505 00:14:46.505 real 0m7.902s 00:14:46.505 user 0m14.359s 
00:14:46.505 sys 0m2.091s 00:14:46.505 23:54:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:46.505 23:54:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.505 ************************************ 00:14:46.505 END TEST nvmf_nvme_cli 00:14:46.505 ************************************ 00:14:46.505 23:54:51 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:46.505 23:54:51 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:46.505 23:54:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:46.505 23:54:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:46.505 23:54:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:46.505 ************************************ 00:14:46.505 START TEST nvmf_vfio_user 00:14:46.505 ************************************ 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:46.505 * Looking for test storage... 
00:14:46.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.505 23:54:51 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:46.506 
23:54:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=747303 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 747303' 00:14:46.506 Process pid: 747303 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 747303 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 747303 ']' 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:46.506 23:54:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:46.506 [2024-07-24 23:54:51.792299] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:46.506 [2024-07-24 23:54:51.792373] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.506 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.506 [2024-07-24 23:54:51.849705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:46.506 [2024-07-24 23:54:51.940713] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.506 [2024-07-24 23:54:51.940761] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.506 [2024-07-24 23:54:51.940785] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.506 [2024-07-24 23:54:51.940796] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.506 [2024-07-24 23:54:51.940805] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:46.506 [2024-07-24 23:54:51.940900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.506 [2024-07-24 23:54:51.940968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.506 [2024-07-24 23:54:51.941014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:46.506 [2024-07-24 23:54:51.941017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.506 23:54:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:46.506 23:54:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:14:46.506 23:54:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:47.468 23:54:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:47.725 23:54:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:47.725 23:54:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:47.725 23:54:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:47.725 23:54:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:47.725 23:54:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:47.983 Malloc1 00:14:47.983 23:54:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:48.240 23:54:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:48.497 23:54:54 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:48.753 23:54:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:48.753 23:54:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:48.753 23:54:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:49.010 Malloc2 00:14:49.010 23:54:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:49.267 23:54:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:49.524 23:54:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:49.783 23:54:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:49.783 23:54:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:49.783 23:54:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:49.783 23:54:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:49.783 23:54:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:49.783 23:54:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:49.783 [2024-07-24 23:54:55.418014] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:49.783 [2024-07-24 23:54:55.418057] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid747844 ] 00:14:49.783 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.783 [2024-07-24 23:54:55.451436] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:49.783 [2024-07-24 23:54:55.453938] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:49.783 [2024-07-24 23:54:55.453968] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe5e7dbe000 00:14:49.783 [2024-07-24 23:54:55.454933] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.783 [2024-07-24 23:54:55.455930] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.783 [2024-07-24 23:54:55.456934] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.783 [2024-07-24 23:54:55.457940] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:49.783 [2024-07-24 23:54:55.458945] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 
0x0, Flags 0x3, Cap offset 0 00:14:49.783 [2024-07-24 23:54:55.459948] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.783 [2024-07-24 23:54:55.460953] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:49.783 [2024-07-24 23:54:55.461961] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.783 [2024-07-24 23:54:55.462967] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:49.783 [2024-07-24 23:54:55.462987] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe5e6b74000 00:14:49.783 [2024-07-24 23:54:55.464157] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:49.783 [2024-07-24 23:54:55.478944] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:49.783 [2024-07-24 23:54:55.478982] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:49.783 [2024-07-24 23:54:55.484085] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:49.783 [2024-07-24 23:54:55.484160] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:49.783 [2024-07-24 23:54:55.484258] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:49.783 [2024-07-24 23:54:55.484291] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:49.783 [2024-07-24 23:54:55.484303] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:49.783 [2024-07-24 23:54:55.486115] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:49.784 [2024-07-24 23:54:55.486140] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:49.784 [2024-07-24 23:54:55.486154] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:49.784 [2024-07-24 23:54:55.487091] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:49.784 [2024-07-24 23:54:55.487133] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:49.784 [2024-07-24 23:54:55.487155] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:49.784 [2024-07-24 23:54:55.488100] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:49.784 [2024-07-24 23:54:55.488138] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:49.784 [2024-07-24 23:54:55.489120] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:49.784 [2024-07-24 23:54:55.489138] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:49.784 [2024-07-24 23:54:55.489147] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:49.784 [2024-07-24 23:54:55.489159] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:49.784 [2024-07-24 23:54:55.489269] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:49.784 [2024-07-24 23:54:55.489278] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:49.784 [2024-07-24 23:54:55.489287] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:49.784 [2024-07-24 23:54:55.490138] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:49.784 [2024-07-24 23:54:55.491137] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:49.784 [2024-07-24 23:54:55.492142] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:49.784 [2024-07-24 23:54:55.493146] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:49.784 [2024-07-24 23:54:55.493254] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:49.784 [2024-07-24 23:54:55.494159] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:49.784 [2024-07-24 23:54:55.494177] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:49.784 [2024-07-24 23:54:55.494186] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:49.784 [2024-07-24 23:54:55.494211] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:49.784 [2024-07-24 23:54:55.494225] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:49.784 [2024-07-24 23:54:55.494256] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:49.784 [2024-07-24 23:54:55.494266] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.784 [2024-07-24 23:54:55.494288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:49.784 [2024-07-24 23:54:55.494351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:49.784 [2024-07-24 23:54:55.494388] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:49.784 [2024-07-24 23:54:55.494398] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:49.784 [2024-07-24 23:54:55.494406] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:49.784 [2024-07-24 23:54:55.494414] 
nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:49.784 [2024-07-24 23:54:55.494422] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:49.784 [2024-07-24 23:54:55.494430] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:49.784 [2024-07-24 23:54:55.494438] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:49.784 [2024-07-24 23:54:55.494451] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:49.784 [2024-07-24 23:54:55.494467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:49.784 [2024-07-24 23:54:55.494482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:49.784 [2024-07-24 23:54:55.494501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.784 [2024-07-24 23:54:55.494514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.784 [2024-07-24 23:54:55.494527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.784 [2024-07-24 23:54:55.494539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.784 [2024-07-24 23:54:55.494547] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:49.784 [2024-07-24 23:54:55.494563] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:49.784 [2024-07-24 23:54:55.494578] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:49.784 [2024-07-24 23:54:55.494590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:49.784 [2024-07-24 23:54:55.494601] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:49.784 [2024-07-24 23:54:55.494610] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:49.784 [2024-07-24 23:54:55.494621] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:49.784 [2024-07-24 23:54:55.494635] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:49.784 [2024-07-24 23:54:55.494649] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:49.784 [2024-07-24 23:54:55.494666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:49.784 [2024-07-24 23:54:55.494734] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:49.784 [2024-07-24 23:54:55.494755] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:49.785 [2024-07-24 23:54:55.494769] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:49.785 [2024-07-24 23:54:55.494778] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:49.785 [2024-07-24 23:54:55.494788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:49.785 [2024-07-24 23:54:55.494802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:49.785 [2024-07-24 23:54:55.494819] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:49.785 [2024-07-24 23:54:55.494840] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:49.785 [2024-07-24 23:54:55.494854] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:49.785 [2024-07-24 23:54:55.494866] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:49.785 [2024-07-24 23:54:55.494875] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.785 [2024-07-24 23:54:55.494885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:49.785 [2024-07-24 23:54:55.494906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:49.785 [2024-07-24 23:54:55.494929] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:49.785 [2024-07-24 23:54:55.494943] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:49.785 [2024-07-24 23:54:55.494956] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:49.785 [2024-07-24 23:54:55.494964] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.785 [2024-07-24 23:54:55.494974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:49.785 [2024-07-24 23:54:55.494988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:49.785 [2024-07-24 23:54:55.495002] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:49.785 [2024-07-24 23:54:55.495013] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:49.785 [2024-07-24 23:54:55.495028] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:49.785 [2024-07-24 23:54:55.495039] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:49.785 [2024-07-24 23:54:55.495048] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:49.785 [2024-07-24 23:54:55.495071] 
nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:49.785 [2024-07-24 23:54:55.495079] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:49.785 [2024-07-24 23:54:55.495095] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:49.785 [2024-07-24 23:54:55.495152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:49.785 [2024-07-24 23:54:55.495173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:49.785 [2024-07-24 23:54:55.495192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:49.785 [2024-07-24 23:54:55.495205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:49.785 [2024-07-24 23:54:55.495221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:49.785 [2024-07-24 23:54:55.495233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:49.785 [2024-07-24 23:54:55.495249] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:49.785 [2024-07-24 23:54:55.495260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:49.785 [2024-07-24 23:54:55.495279] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:49.785 
[2024-07-24 23:54:55.495288] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:49.785 [2024-07-24 23:54:55.495295] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:49.785 [2024-07-24 23:54:55.495301] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:49.785 [2024-07-24 23:54:55.495311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:49.785 [2024-07-24 23:54:55.495322] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:49.785 [2024-07-24 23:54:55.495331] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:49.785 [2024-07-24 23:54:55.495340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:49.785 [2024-07-24 23:54:55.495351] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:49.785 [2024-07-24 23:54:55.495359] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.785 [2024-07-24 23:54:55.495368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:49.785 [2024-07-24 23:54:55.495380] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:49.785 [2024-07-24 23:54:55.495389] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:49.785 [2024-07-24 23:54:55.495398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 
0x2000002f4000 PRP2 0x0 00:14:49.785 [2024-07-24 23:54:55.495409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:49.785 [2024-07-24 23:54:55.495445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:49.785 [2024-07-24 23:54:55.495461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:49.785 [2024-07-24 23:54:55.495475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:49.785 ===================================================== 00:14:49.786 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:49.786 ===================================================== 00:14:49.786 Controller Capabilities/Features 00:14:49.786 ================================ 00:14:49.786 Vendor ID: 4e58 00:14:49.786 Subsystem Vendor ID: 4e58 00:14:49.786 Serial Number: SPDK1 00:14:49.786 Model Number: SPDK bdev Controller 00:14:49.786 Firmware Version: 24.05.1 00:14:49.786 Recommended Arb Burst: 6 00:14:49.786 IEEE OUI Identifier: 8d 6b 50 00:14:49.786 Multi-path I/O 00:14:49.786 May have multiple subsystem ports: Yes 00:14:49.786 May have multiple controllers: Yes 00:14:49.786 Associated with SR-IOV VF: No 00:14:49.786 Max Data Transfer Size: 131072 00:14:49.786 Max Number of Namespaces: 32 00:14:49.786 Max Number of I/O Queues: 127 00:14:49.786 NVMe Specification Version (VS): 1.3 00:14:49.786 NVMe Specification Version (Identify): 1.3 00:14:49.786 Maximum Queue Entries: 256 00:14:49.786 Contiguous Queues Required: Yes 00:14:49.786 Arbitration Mechanisms Supported 00:14:49.786 Weighted Round Robin: Not Supported 00:14:49.786 Vendor Specific: Not Supported 00:14:49.786 Reset Timeout: 15000 ms 00:14:49.786 Doorbell Stride: 4 bytes 00:14:49.786 NVM 
Subsystem Reset: Not Supported 00:14:49.786 Command Sets Supported 00:14:49.786 NVM Command Set: Supported 00:14:49.786 Boot Partition: Not Supported 00:14:49.786 Memory Page Size Minimum: 4096 bytes 00:14:49.786 Memory Page Size Maximum: 4096 bytes 00:14:49.786 Persistent Memory Region: Not Supported 00:14:49.786 Optional Asynchronous Events Supported 00:14:49.786 Namespace Attribute Notices: Supported 00:14:49.786 Firmware Activation Notices: Not Supported 00:14:49.786 ANA Change Notices: Not Supported 00:14:49.786 PLE Aggregate Log Change Notices: Not Supported 00:14:49.786 LBA Status Info Alert Notices: Not Supported 00:14:49.786 EGE Aggregate Log Change Notices: Not Supported 00:14:49.786 Normal NVM Subsystem Shutdown event: Not Supported 00:14:49.786 Zone Descriptor Change Notices: Not Supported 00:14:49.786 Discovery Log Change Notices: Not Supported 00:14:49.786 Controller Attributes 00:14:49.786 128-bit Host Identifier: Supported 00:14:49.786 Non-Operational Permissive Mode: Not Supported 00:14:49.786 NVM Sets: Not Supported 00:14:49.786 Read Recovery Levels: Not Supported 00:14:49.786 Endurance Groups: Not Supported 00:14:49.786 Predictable Latency Mode: Not Supported 00:14:49.786 Traffic Based Keep ALive: Not Supported 00:14:49.786 Namespace Granularity: Not Supported 00:14:49.786 SQ Associations: Not Supported 00:14:49.786 UUID List: Not Supported 00:14:49.786 Multi-Domain Subsystem: Not Supported 00:14:49.786 Fixed Capacity Management: Not Supported 00:14:49.786 Variable Capacity Management: Not Supported 00:14:49.786 Delete Endurance Group: Not Supported 00:14:49.786 Delete NVM Set: Not Supported 00:14:49.786 Extended LBA Formats Supported: Not Supported 00:14:49.786 Flexible Data Placement Supported: Not Supported 00:14:49.786 00:14:49.786 Controller Memory Buffer Support 00:14:49.786 ================================ 00:14:49.786 Supported: No 00:14:49.786 00:14:49.786 Persistent Memory Region Support 00:14:49.786 ================================ 
00:14:49.786 Supported: No 00:14:49.786 00:14:49.786 Admin Command Set Attributes 00:14:49.786 ============================ 00:14:49.786 Security Send/Receive: Not Supported 00:14:49.786 Format NVM: Not Supported 00:14:49.786 Firmware Activate/Download: Not Supported 00:14:49.786 Namespace Management: Not Supported 00:14:49.786 Device Self-Test: Not Supported 00:14:49.786 Directives: Not Supported 00:14:49.786 NVMe-MI: Not Supported 00:14:49.786 Virtualization Management: Not Supported 00:14:49.786 Doorbell Buffer Config: Not Supported 00:14:49.786 Get LBA Status Capability: Not Supported 00:14:49.786 Command & Feature Lockdown Capability: Not Supported 00:14:49.786 Abort Command Limit: 4 00:14:49.786 Async Event Request Limit: 4 00:14:49.786 Number of Firmware Slots: N/A 00:14:49.786 Firmware Slot 1 Read-Only: N/A 00:14:49.786 Firmware Activation Without Reset: N/A 00:14:49.786 Multiple Update Detection Support: N/A 00:14:49.786 Firmware Update Granularity: No Information Provided 00:14:49.786 Per-Namespace SMART Log: No 00:14:49.786 Asymmetric Namespace Access Log Page: Not Supported 00:14:49.786 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:49.786 Command Effects Log Page: Supported 00:14:49.786 Get Log Page Extended Data: Supported 00:14:49.786 Telemetry Log Pages: Not Supported 00:14:49.786 Persistent Event Log Pages: Not Supported 00:14:49.786 Supported Log Pages Log Page: May Support 00:14:49.786 Commands Supported & Effects Log Page: Not Supported 00:14:49.786 Feature Identifiers & Effects Log Page:May Support 00:14:49.786 NVMe-MI Commands & Effects Log Page: May Support 00:14:49.786 Data Area 4 for Telemetry Log: Not Supported 00:14:49.786 Error Log Page Entries Supported: 128 00:14:49.786 Keep Alive: Supported 00:14:49.786 Keep Alive Granularity: 10000 ms 00:14:49.786 00:14:49.786 NVM Command Set Attributes 00:14:49.786 ========================== 00:14:49.786 Submission Queue Entry Size 00:14:49.786 Max: 64 00:14:49.786 Min: 64 00:14:49.786 Completion 
Queue Entry Size 00:14:49.786 Max: 16 00:14:49.786 Min: 16 00:14:49.786 Number of Namespaces: 32 00:14:49.786 Compare Command: Supported 00:14:49.786 Write Uncorrectable Command: Not Supported 00:14:49.786 Dataset Management Command: Supported 00:14:49.786 Write Zeroes Command: Supported 00:14:49.786 Set Features Save Field: Not Supported 00:14:49.786 Reservations: Not Supported 00:14:49.786 Timestamp: Not Supported 00:14:49.786 Copy: Supported 00:14:49.786 Volatile Write Cache: Present 00:14:49.786 Atomic Write Unit (Normal): 1 00:14:49.786 Atomic Write Unit (PFail): 1 00:14:49.786 Atomic Compare & Write Unit: 1 00:14:49.786 Fused Compare & Write: Supported 00:14:49.786 Scatter-Gather List 00:14:49.786 SGL Command Set: Supported (Dword aligned) 00:14:49.786 SGL Keyed: Not Supported 00:14:49.786 SGL Bit Bucket Descriptor: Not Supported 00:14:49.786 SGL Metadata Pointer: Not Supported 00:14:49.786 Oversized SGL: Not Supported 00:14:49.786 SGL Metadata Address: Not Supported 00:14:49.786 SGL Offset: Not Supported 00:14:49.786 Transport SGL Data Block: Not Supported 00:14:49.786 Replay Protected Memory Block: Not Supported 00:14:49.786 00:14:49.786 Firmware Slot Information 00:14:49.786 ========================= 00:14:49.786 Active slot: 1 00:14:49.786 Slot 1 Firmware Revision: 24.05.1 00:14:49.786 00:14:49.786 00:14:49.787 Commands Supported and Effects 00:14:49.787 ============================== 00:14:49.787 Admin Commands 00:14:49.787 -------------- 00:14:49.787 Get Log Page (02h): Supported 00:14:49.787 Identify (06h): Supported 00:14:49.787 Abort (08h): Supported 00:14:49.787 Set Features (09h): Supported 00:14:49.787 Get Features (0Ah): Supported 00:14:49.787 Asynchronous Event Request (0Ch): Supported 00:14:49.787 Keep Alive (18h): Supported 00:14:49.787 I/O Commands 00:14:49.787 ------------ 00:14:49.787 Flush (00h): Supported LBA-Change 00:14:49.787 Write (01h): Supported LBA-Change 00:14:49.787 Read (02h): Supported 00:14:49.787 Compare (05h): Supported 
00:14:49.787 Write Zeroes (08h): Supported LBA-Change 00:14:49.787 Dataset Management (09h): Supported LBA-Change 00:14:49.787 Copy (19h): Supported LBA-Change 00:14:49.787 Unknown (79h): Supported LBA-Change 00:14:49.787 Unknown (7Ah): Supported 00:14:49.787 00:14:49.787 Error Log 00:14:49.787 ========= 00:14:49.787 00:14:49.787 Arbitration 00:14:49.787 =========== 00:14:49.787 Arbitration Burst: 1 00:14:49.787 00:14:49.787 Power Management 00:14:49.787 ================ 00:14:49.787 Number of Power States: 1 00:14:49.787 Current Power State: Power State #0 00:14:49.787 Power State #0: 00:14:49.787 Max Power: 0.00 W 00:14:49.787 Non-Operational State: Operational 00:14:49.787 Entry Latency: Not Reported 00:14:49.787 Exit Latency: Not Reported 00:14:49.787 Relative Read Throughput: 0 00:14:49.787 Relative Read Latency: 0 00:14:49.787 Relative Write Throughput: 0 00:14:49.787 Relative Write Latency: 0 00:14:49.787 Idle Power: Not Reported 00:14:49.787 Active Power: Not Reported 00:14:49.787 Non-Operational Permissive Mode: Not Supported 00:14:49.787 00:14:49.787 Health Information 00:14:49.787 ================== 00:14:49.787 Critical Warnings: 00:14:49.787 Available Spare Space: OK 00:14:49.787 Temperature: OK 00:14:49.787 Device Reliability: OK 00:14:49.787 Read Only: No 00:14:49.787 Volatile Memory Backup: OK 00:14:49.787 Current Temperature: 0 Kelvin[2024-07-24 23:54:55.495610] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:49.787 [2024-07-24 23:54:55.495687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:49.787 [2024-07-24 23:54:55.495730] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:49.787 [2024-07-24 23:54:55.495747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:49.787 [2024-07-24 23:54:55.495758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.787 [2024-07-24 23:54:55.495768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.787 [2024-07-24 23:54:55.495777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.787 [2024-07-24 23:54:55.498114] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:49.787 [2024-07-24 23:54:55.498138] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:49.787 [2024-07-24 23:54:55.498188] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:50.045 [2024-07-24 23:54:55.498262] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:50.045 [2024-07-24 23:54:55.498275] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:50.045 [2024-07-24 23:54:55.499200] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:50.045 [2024-07-24 23:54:55.499223] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:50.045 [2024-07-24 23:54:55.499280] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:50.045 [2024-07-24 23:54:55.503129] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, 
IOVA 0x200000200000, Size 0x200000 00:14:50.045 (-273 Celsius) 00:14:50.045 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:50.045 Available Spare: 0% 00:14:50.045 Available Spare Threshold: 0% 00:14:50.045 Life Percentage Used: 0% 00:14:50.045 Data Units Read: 0 00:14:50.045 Data Units Written: 0 00:14:50.045 Host Read Commands: 0 00:14:50.045 Host Write Commands: 0 00:14:50.045 Controller Busy Time: 0 minutes 00:14:50.045 Power Cycles: 0 00:14:50.045 Power On Hours: 0 hours 00:14:50.045 Unsafe Shutdowns: 0 00:14:50.045 Unrecoverable Media Errors: 0 00:14:50.045 Lifetime Error Log Entries: 0 00:14:50.045 Warning Temperature Time: 0 minutes 00:14:50.045 Critical Temperature Time: 0 minutes 00:14:50.045 00:14:50.045 Number of Queues 00:14:50.045 ================ 00:14:50.045 Number of I/O Submission Queues: 127 00:14:50.045 Number of I/O Completion Queues: 127 00:14:50.045 00:14:50.045 Active Namespaces 00:14:50.045 ================= 00:14:50.045 Namespace ID:1 00:14:50.045 Error Recovery Timeout: Unlimited 00:14:50.045 Command Set Identifier: NVM (00h) 00:14:50.045 Deallocate: Supported 00:14:50.045 Deallocated/Unwritten Error: Not Supported 00:14:50.045 Deallocated Read Value: Unknown 00:14:50.045 Deallocate in Write Zeroes: Not Supported 00:14:50.045 Deallocated Guard Field: 0xFFFF 00:14:50.045 Flush: Supported 00:14:50.045 Reservation: Supported 00:14:50.045 Namespace Sharing Capabilities: Multiple Controllers 00:14:50.045 Size (in LBAs): 131072 (0GiB) 00:14:50.045 Capacity (in LBAs): 131072 (0GiB) 00:14:50.045 Utilization (in LBAs): 131072 (0GiB) 00:14:50.045 NGUID: CA356D1F43534E59A84493E7CDCAF490 00:14:50.045 UUID: ca356d1f-4353-4e59-a844-93e7cdcaf490 00:14:50.045 Thin Provisioning: Not Supported 00:14:50.045 Per-NS Atomic Units: Yes 00:14:50.045 Atomic Boundary Size (Normal): 0 00:14:50.045 Atomic Boundary Size (PFail): 0 00:14:50.045 Atomic Boundary Offset: 0 00:14:50.045 Maximum Single Source Range Length: 65535 00:14:50.045 Maximum Copy Length: 65535 
00:14:50.045 Maximum Source Range Count: 1 00:14:50.045 NGUID/EUI64 Never Reused: No 00:14:50.045 Namespace Write Protected: No 00:14:50.045 Number of LBA Formats: 1 00:14:50.045 Current LBA Format: LBA Format #00 00:14:50.045 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:50.045 00:14:50.045 23:54:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:50.045 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.045 [2024-07-24 23:54:55.731949] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:55.308 Initializing NVMe Controllers 00:14:55.308 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:55.308 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:55.308 Initialization complete. Launching workers. 
00:14:55.308 ======================================================== 00:14:55.308 Latency(us) 00:14:55.308 Device Information : IOPS MiB/s Average min max 00:14:55.308 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35432.45 138.41 3612.02 1160.45 7657.98 00:14:55.308 ======================================================== 00:14:55.308 Total : 35432.45 138.41 3612.02 1160.45 7657.98 00:14:55.308 00:14:55.308 [2024-07-24 23:55:00.755017] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:55.308 23:55:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:55.308 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.308 [2024-07-24 23:55:00.996228] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.568 Initializing NVMe Controllers 00:15:00.568 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:00.568 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:00.568 Initialization complete. Launching workers. 
00:15:00.568 ======================================================== 00:15:00.568 Latency(us) 00:15:00.568 Device Information : IOPS MiB/s Average min max 00:15:00.568 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16015.40 62.56 8002.26 5986.40 15740.67 00:15:00.568 ======================================================== 00:15:00.568 Total : 16015.40 62.56 8002.26 5986.40 15740.67 00:15:00.568 00:15:00.568 [2024-07-24 23:55:06.035526] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:00.568 23:55:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:00.568 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.568 [2024-07-24 23:55:06.240552] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:05.827 [2024-07-24 23:55:11.323496] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:05.827 Initializing NVMe Controllers 00:15:05.827 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:05.827 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:05.827 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:05.827 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:05.827 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:05.827 Initialization complete. Launching workers. 
00:15:05.827 Starting thread on core 2 00:15:05.827 Starting thread on core 3 00:15:05.827 Starting thread on core 1 00:15:05.827 23:55:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:05.827 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.085 [2024-07-24 23:55:11.628384] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:09.367 [2024-07-24 23:55:14.687617] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:09.367 Initializing NVMe Controllers 00:15:09.367 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.367 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.367 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:09.367 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:09.367 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:09.367 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:09.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:09.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:09.368 Initialization complete. Launching workers. 
00:15:09.368 Starting thread on core 1 with urgent priority queue 00:15:09.368 Starting thread on core 2 with urgent priority queue 00:15:09.368 Starting thread on core 3 with urgent priority queue 00:15:09.368 Starting thread on core 0 with urgent priority queue 00:15:09.368 SPDK bdev Controller (SPDK1 ) core 0: 6162.33 IO/s 16.23 secs/100000 ios 00:15:09.368 SPDK bdev Controller (SPDK1 ) core 1: 5394.00 IO/s 18.54 secs/100000 ios 00:15:09.368 SPDK bdev Controller (SPDK1 ) core 2: 4809.67 IO/s 20.79 secs/100000 ios 00:15:09.368 SPDK bdev Controller (SPDK1 ) core 3: 5634.00 IO/s 17.75 secs/100000 ios 00:15:09.368 ======================================================== 00:15:09.368 00:15:09.368 23:55:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:09.368 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.368 [2024-07-24 23:55:14.979666] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:09.368 Initializing NVMe Controllers 00:15:09.368 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.368 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.368 Namespace ID: 1 size: 0GB 00:15:09.368 Initialization complete. 00:15:09.368 INFO: using host memory buffer for IO 00:15:09.368 Hello world! 
00:15:09.368 [2024-07-24 23:55:15.013322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:09.368 23:55:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:09.626 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.626 [2024-07-24 23:55:15.289359] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:10.999 Initializing NVMe Controllers 00:15:10.999 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:10.999 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:10.999 Initialization complete. Launching workers. 00:15:10.999 submit (in ns) avg, min, max = 9261.8, 3543.3, 5998463.3 00:15:10.999 complete (in ns) avg, min, max = 28369.6, 2066.7, 5013511.1 00:15:10.999 00:15:10.999 Submit histogram 00:15:10.999 ================ 00:15:10.999 Range in us Cumulative Count 00:15:10.999 3.532 - 3.556: 0.0076% ( 1) 00:15:10.999 3.556 - 3.579: 0.4165% ( 54) 00:15:10.999 3.579 - 3.603: 2.3626% ( 257) 00:15:10.999 3.603 - 3.627: 7.7238% ( 708) 00:15:10.999 3.627 - 3.650: 17.6132% ( 1306) 00:15:10.999 3.650 - 3.674: 27.4194% ( 1295) 00:15:10.999 3.674 - 3.698: 37.6571% ( 1352) 00:15:10.999 3.698 - 3.721: 45.7595% ( 1070) 00:15:10.999 3.721 - 3.745: 51.8022% ( 798) 00:15:10.999 3.745 - 3.769: 56.4440% ( 613) 00:15:10.999 3.769 - 3.793: 60.1848% ( 494) 00:15:10.999 3.793 - 3.816: 63.4030% ( 425) 00:15:10.999 3.816 - 3.840: 66.0533% ( 350) 00:15:10.999 3.840 - 3.864: 69.6880% ( 480) 00:15:10.999 3.864 - 3.887: 74.0648% ( 578) 00:15:10.999 3.887 - 3.911: 78.8051% ( 626) 00:15:10.999 3.911 - 3.935: 82.6745% ( 511) 00:15:10.999 3.935 - 3.959: 85.3551% ( 354) 00:15:10.999 3.959 - 3.982: 87.5133% ( 285) 00:15:10.999 3.982 - 4.006: 89.1337% ( 
214) 00:15:10.999 4.006 - 4.030: 90.3983% ( 167) 00:15:10.999 4.030 - 4.053: 91.5114% ( 147) 00:15:10.999 4.053 - 4.077: 92.3444% ( 110) 00:15:10.999 4.077 - 4.101: 93.2076% ( 114) 00:15:10.999 4.101 - 4.124: 94.1087% ( 119) 00:15:10.999 4.124 - 4.148: 94.8735% ( 101) 00:15:10.999 4.148 - 4.172: 95.4263% ( 73) 00:15:10.999 4.172 - 4.196: 95.8201% ( 52) 00:15:10.999 4.196 - 4.219: 96.0775% ( 34) 00:15:10.999 4.219 - 4.243: 96.2896% ( 28) 00:15:10.999 4.243 - 4.267: 96.4183% ( 17) 00:15:10.999 4.267 - 4.290: 96.6379% ( 29) 00:15:10.999 4.290 - 4.314: 96.7818% ( 19) 00:15:10.999 4.314 - 4.338: 96.9105% ( 17) 00:15:10.999 4.338 - 4.361: 96.9786% ( 9) 00:15:10.999 4.361 - 4.385: 97.1225% ( 19) 00:15:10.999 4.385 - 4.409: 97.1982% ( 10) 00:15:10.999 4.409 - 4.433: 97.2891% ( 12) 00:15:10.999 4.433 - 4.456: 97.3270% ( 5) 00:15:10.999 4.456 - 4.480: 97.3573% ( 4) 00:15:10.999 4.480 - 4.504: 97.3800% ( 3) 00:15:10.999 4.504 - 4.527: 97.3951% ( 2) 00:15:10.999 4.527 - 4.551: 97.4254% ( 4) 00:15:10.999 4.551 - 4.575: 97.4406% ( 2) 00:15:10.999 4.575 - 4.599: 97.4633% ( 3) 00:15:10.999 4.646 - 4.670: 97.4936% ( 4) 00:15:10.999 4.693 - 4.717: 97.5239% ( 4) 00:15:10.999 4.717 - 4.741: 97.5769% ( 7) 00:15:10.999 4.741 - 4.764: 97.5996% ( 3) 00:15:10.999 4.764 - 4.788: 97.6299% ( 4) 00:15:10.999 4.788 - 4.812: 97.6677% ( 5) 00:15:10.999 4.812 - 4.836: 97.6904% ( 3) 00:15:10.999 4.836 - 4.859: 97.7359% ( 6) 00:15:10.999 4.859 - 4.883: 97.8192% ( 11) 00:15:10.999 4.883 - 4.907: 97.8267% ( 1) 00:15:10.999 4.907 - 4.930: 97.8722% ( 6) 00:15:10.999 4.930 - 4.954: 97.9328% ( 8) 00:15:10.999 4.954 - 4.978: 97.9706% ( 5) 00:15:10.999 4.978 - 5.001: 98.0085% ( 5) 00:15:10.999 5.001 - 5.025: 98.0388% ( 4) 00:15:10.999 5.025 - 5.049: 98.0615% ( 3) 00:15:10.999 5.049 - 5.073: 98.0842% ( 3) 00:15:10.999 5.073 - 5.096: 98.1221% ( 5) 00:15:10.999 5.096 - 5.120: 98.1372% ( 2) 00:15:10.999 5.120 - 5.144: 98.1524% ( 2) 00:15:10.999 5.144 - 5.167: 98.1751% ( 3) 00:15:10.999 5.191 - 5.215: 98.1902% ( 
2) 00:15:10.999 5.239 - 5.262: 98.1978% ( 1) 00:15:10.999 5.262 - 5.286: 98.2054% ( 1) 00:15:10.999 5.310 - 5.333: 98.2129% ( 1) 00:15:10.999 5.404 - 5.428: 98.2205% ( 1) 00:15:10.999 5.428 - 5.452: 98.2357% ( 2) 00:15:10.999 5.523 - 5.547: 98.2432% ( 1) 00:15:10.999 5.547 - 5.570: 98.2584% ( 2) 00:15:10.999 5.594 - 5.618: 98.2659% ( 1) 00:15:10.999 5.618 - 5.641: 98.2735% ( 1) 00:15:10.999 5.641 - 5.665: 98.2811% ( 1) 00:15:10.999 5.760 - 5.784: 98.2887% ( 1) 00:15:10.999 5.855 - 5.879: 98.2962% ( 1) 00:15:10.999 5.879 - 5.902: 98.3114% ( 2) 00:15:10.999 5.997 - 6.021: 98.3189% ( 1) 00:15:11.000 6.021 - 6.044: 98.3341% ( 2) 00:15:11.000 6.068 - 6.116: 98.3417% ( 1) 00:15:11.000 6.258 - 6.305: 98.3492% ( 1) 00:15:11.000 6.400 - 6.447: 98.3568% ( 1) 00:15:11.000 6.495 - 6.542: 98.3720% ( 2) 00:15:11.000 6.590 - 6.637: 98.3795% ( 1) 00:15:11.000 6.874 - 6.921: 98.3871% ( 1) 00:15:11.000 6.921 - 6.969: 98.3947% ( 1) 00:15:11.000 6.969 - 7.016: 98.4098% ( 2) 00:15:11.000 7.016 - 7.064: 98.4250% ( 2) 00:15:11.000 7.111 - 7.159: 98.4325% ( 1) 00:15:11.000 7.159 - 7.206: 98.4401% ( 1) 00:15:11.000 7.253 - 7.301: 98.4477% ( 1) 00:15:11.000 7.396 - 7.443: 98.4552% ( 1) 00:15:11.000 7.490 - 7.538: 98.4628% ( 1) 00:15:11.000 7.538 - 7.585: 98.4704% ( 1) 00:15:11.000 7.633 - 7.680: 98.4931% ( 3) 00:15:11.000 7.680 - 7.727: 98.5007% ( 1) 00:15:11.000 7.822 - 7.870: 98.5083% ( 1) 00:15:11.000 7.870 - 7.917: 98.5234% ( 2) 00:15:11.000 7.917 - 7.964: 98.5310% ( 1) 00:15:11.000 7.964 - 8.012: 98.5385% ( 1) 00:15:11.000 8.107 - 8.154: 98.5613% ( 3) 00:15:11.000 8.154 - 8.201: 98.5915% ( 4) 00:15:11.000 8.201 - 8.249: 98.5991% ( 1) 00:15:11.000 8.296 - 8.344: 98.6067% ( 1) 00:15:11.000 8.344 - 8.391: 98.6143% ( 1) 00:15:11.000 8.391 - 8.439: 98.6218% ( 1) 00:15:11.000 8.486 - 8.533: 98.6294% ( 1) 00:15:11.000 8.533 - 8.581: 98.6370% ( 1) 00:15:11.000 8.581 - 8.628: 98.6446% ( 1) 00:15:11.000 8.723 - 8.770: 98.6521% ( 1) 00:15:11.000 8.770 - 8.818: 98.6597% ( 1) 00:15:11.000 8.865 - 
8.913: 98.6673% ( 1) 00:15:11.000 9.150 - 9.197: 98.6748% ( 1) 00:15:11.000 9.244 - 9.292: 98.6900% ( 2) 00:15:11.000 9.576 - 9.624: 98.6976% ( 1) 00:15:11.000 9.624 - 9.671: 98.7051% ( 1) 00:15:11.000 9.671 - 9.719: 98.7279% ( 3) 00:15:11.000 9.813 - 9.861: 98.7354% ( 1) 00:15:11.000 9.908 - 9.956: 98.7430% ( 1) 00:15:11.000 10.050 - 10.098: 98.7506% ( 1) 00:15:11.000 10.382 - 10.430: 98.7733% ( 3) 00:15:11.000 10.714 - 10.761: 98.7809% ( 1) 00:15:11.000 10.761 - 10.809: 98.7884% ( 1) 00:15:11.000 10.809 - 10.856: 98.7960% ( 1) 00:15:11.000 10.999 - 11.046: 98.8036% ( 1) 00:15:11.000 11.425 - 11.473: 98.8111% ( 1) 00:15:11.000 11.520 - 11.567: 98.8187% ( 1) 00:15:11.000 11.615 - 11.662: 98.8414% ( 3) 00:15:11.000 11.757 - 11.804: 98.8490% ( 1) 00:15:11.000 12.231 - 12.326: 98.8566% ( 1) 00:15:11.000 12.326 - 12.421: 98.8642% ( 1) 00:15:11.000 12.421 - 12.516: 98.8869% ( 3) 00:15:11.000 12.610 - 12.705: 98.9020% ( 2) 00:15:11.000 12.705 - 12.800: 98.9172% ( 2) 00:15:11.000 12.895 - 12.990: 98.9323% ( 2) 00:15:11.000 13.464 - 13.559: 98.9399% ( 1) 00:15:11.000 13.559 - 13.653: 98.9474% ( 1) 00:15:11.000 13.843 - 13.938: 98.9702% ( 3) 00:15:11.000 14.033 - 14.127: 98.9929% ( 3) 00:15:11.000 14.222 - 14.317: 99.0005% ( 1) 00:15:11.000 14.317 - 14.412: 99.0080% ( 1) 00:15:11.000 14.507 - 14.601: 99.0232% ( 2) 00:15:11.000 14.601 - 14.696: 99.0307% ( 1) 00:15:11.000 14.696 - 14.791: 99.0459% ( 2) 00:15:11.000 14.791 - 14.886: 99.0610% ( 2) 00:15:11.000 17.067 - 17.161: 99.0762% ( 2) 00:15:11.000 17.351 - 17.446: 99.0989% ( 3) 00:15:11.000 17.446 - 17.541: 99.1368% ( 5) 00:15:11.000 17.541 - 17.636: 99.1822% ( 6) 00:15:11.000 17.730 - 17.825: 99.2276% ( 6) 00:15:11.000 17.825 - 17.920: 99.2806% ( 7) 00:15:11.000 17.920 - 18.015: 99.3488% ( 9) 00:15:11.000 18.015 - 18.110: 99.3715% ( 3) 00:15:11.000 18.110 - 18.204: 99.3942% ( 3) 00:15:11.000 18.204 - 18.299: 99.4548% ( 8) 00:15:11.000 18.299 - 18.394: 99.5154% ( 8) 00:15:11.000 18.394 - 18.489: 99.5608% ( 6) 00:15:11.000 
18.489 - 18.584: 99.6214% ( 8) 00:15:11.000 18.584 - 18.679: 99.6668% ( 6) 00:15:11.000 18.679 - 18.773: 99.6820% ( 2) 00:15:11.000 18.773 - 18.868: 99.7047% ( 3) 00:15:11.000 18.868 - 18.963: 99.7501% ( 6) 00:15:11.000 18.963 - 19.058: 99.7880% ( 5) 00:15:11.000 19.058 - 19.153: 99.8031% ( 2) 00:15:11.000 19.532 - 19.627: 99.8107% ( 1) 00:15:11.000 19.627 - 19.721: 99.8183% ( 1) 00:15:11.000 19.721 - 19.816: 99.8258% ( 1) 00:15:11.000 19.816 - 19.911: 99.8334% ( 1) 00:15:11.000 20.101 - 20.196: 99.8410% ( 1) 00:15:11.000 20.196 - 20.290: 99.8486% ( 1) 00:15:11.000 20.764 - 20.859: 99.8561% ( 1) 00:15:11.000 22.471 - 22.566: 99.8637% ( 1) 00:15:11.000 23.419 - 23.514: 99.8713% ( 1) 00:15:11.000 3980.705 - 4004.978: 99.9470% ( 10) 00:15:11.000 4004.978 - 4029.250: 99.9924% ( 6) 00:15:11.000 5995.330 - 6019.603: 100.0000% ( 1) 00:15:11.000 00:15:11.000 Complete histogram 00:15:11.000 ================== 00:15:11.000 Range in us Cumulative Count 00:15:11.000 2.062 - 2.074: 1.9234% ( 254) 00:15:11.000 2.074 - 2.086: 34.5298% ( 4306) 00:15:11.000 2.086 - 2.098: 42.4807% ( 1050) 00:15:11.000 2.098 - 2.110: 45.8731% ( 448) 00:15:11.000 2.110 - 2.121: 58.4886% ( 1666) 00:15:11.000 2.121 - 2.133: 61.2979% ( 371) 00:15:11.000 2.133 - 2.145: 65.7580% ( 589) 00:15:11.000 2.145 - 2.157: 74.8296% ( 1198) 00:15:11.000 2.157 - 2.169: 76.5864% ( 232) 00:15:11.000 2.169 - 2.181: 78.7218% ( 282) 00:15:11.000 2.181 - 2.193: 81.9325% ( 424) 00:15:11.000 2.193 - 2.204: 82.7427% ( 107) 00:15:11.000 2.204 - 2.216: 84.0678% ( 175) 00:15:11.000 2.216 - 2.228: 88.2705% ( 555) 00:15:11.000 2.228 - 2.240: 90.5800% ( 305) 00:15:11.000 2.240 - 2.252: 92.0264% ( 191) 00:15:11.000 2.252 - 2.264: 93.3288% ( 172) 00:15:11.000 2.264 - 2.276: 93.6393% ( 41) 00:15:11.000 2.276 - 2.287: 93.8589% ( 29) 00:15:11.000 2.287 - 2.299: 94.3283% ( 62) 00:15:11.000 2.299 - 2.311: 94.8433% ( 68) 00:15:11.000 2.311 - 2.323: 95.2824% ( 58) 00:15:11.000 2.323 - 2.335: 95.3809% ( 13) 00:15:11.000 2.335 - 2.347: 
95.4263% ( 6) 00:15:11.000 2.347 - 2.359: 95.4642% ( 5) 00:15:11.000 2.359 - 2.370: 95.5020% ( 5) 00:15:11.000 2.370 - 2.382: 95.6838% ( 24) 00:15:11.000 2.382 - 2.394: 96.0473% ( 48) 00:15:11.000 2.394 - 2.406: 96.3880% ( 45) 00:15:11.000 2.406 - 2.418: 96.5925% ( 27) 00:15:11.000 2.418 - 2.430: 96.7969% ( 27) 00:15:11.000 2.430 - 2.441: 97.0089% ( 28) 00:15:11.000 2.441 - 2.453: 97.2664% ( 34) 00:15:11.000 2.453 - 2.465: 97.4406% ( 23) 00:15:11.000 2.465 - 2.477: 97.6677% ( 30) 00:15:11.000 2.477 - 2.489: 97.8343% ( 22) 00:15:11.000 2.489 - 2.501: 97.9630% ( 17) 00:15:11.000 2.501 - 2.513: 98.0463% ( 11) 00:15:11.000 2.513 - 2.524: 98.1599% ( 15) 00:15:11.000 2.524 - 2.536: 98.2205% ( 8) 00:15:11.000 2.536 - 2.548: 98.2659% ( 6) 00:15:11.000 2.548 - 2.560: 98.2887% ( 3) 00:15:11.000 2.560 - 2.572: 98.3417% ( 7) 00:15:11.000 2.572 - 2.584: 98.3720% ( 4) 00:15:11.000 2.584 - 2.596: 98.3871% ( 2) 00:15:11.000 2.596 - 2.607: 98.3947% ( 1) 00:15:11.000 2.607 - 2.619: 98.4022% ( 1) 00:15:11.000 2.619 - 2.631: 98.4174% ( 2) 00:15:11.000 2.631 - 2.643: 98.4401% ( 3) 00:15:11.000 2.643 - 2.655: 98.4477% ( 1) 00:15:11.000 2.690 - 2.702: 98.4628% ( 2) 00:15:11.000 2.702 - 2.714: 98.4704% ( 1) 00:15:11.000 2.797 - 2.809: 98.4780% ( 1) 00:15:11.000 2.833 - 2.844: 98.4855% ( 1) 00:15:11.000 2.856 - 2.868: 98.4931% ( 1) 00:15:11.000 2.868 - 2.880: 98.5007% ( 1) 00:15:11.000 2.963 - 2.975: 98.5083% ( 1) 00:15:11.000 2.987 - 2.999: 98.5158% ( 1) 00:15:11.000 2.999 - 3.010: 98.5234% ( 1) 00:15:11.000 3.366 - 3.390: 98.5310% ( 1) 00:15:11.000 3.484 - 3.508: 98.5385% ( 1) 00:15:11.000 3.508 - 3.532: 98.5461% ( 1) 00:15:11.001 3.532 - 3.556: 98.5537% ( 1) 00:15:11.001 3.556 - 3.579: 98.5613% ( 1) 00:15:11.001 3.579 - 3.603: 98.5688% ( 1) 00:15:11.001 3.603 - 3.627: 98.5764% ( 1) 00:15:11.001 3.627 - 3.650: 98.5840% ( 1) 00:15:11.001 3.650 - 3.674: 98.5915% ( 1) 00:15:11.001 3.698 - 3.721: 98.6067% ( 2) 00:15:11.001 3.745 - 3.769: 98.6143% ( 1) 00:15:11.001 3.769 - 3.793: 98.6218% ( 
1) 00:15:11.001 3.793 - 3.816: 98.6294% ( 1) 00:15:11.001 [2024-07-24 23:55:16.305491] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.001 3.887 - 3.911: 98.6370% ( 1) 00:15:11.001 3.911 - 3.935: 98.6446% ( 1) 00:15:11.001 4.196 - 4.219: 98.6521% ( 1) 00:15:11.001 5.428 - 5.452: 98.6673% ( 2) 00:15:11.001 6.400 - 6.447: 98.6748% ( 1) 00:15:11.001 6.447 - 6.495: 98.6900% ( 2) 00:15:11.001 6.542 - 6.590: 98.6976% ( 1) 00:15:11.001 6.590 - 6.637: 98.7051% ( 1) 00:15:11.001 6.637 - 6.684: 98.7127% ( 1) 00:15:11.001 6.732 - 6.779: 98.7203% ( 1) 00:15:11.001 7.917 - 7.964: 98.7279% ( 1) 00:15:11.001 7.964 - 8.012: 98.7430% ( 2) 00:15:11.001 8.486 - 8.533: 98.7506% ( 1) 00:15:11.001 10.619 - 10.667: 98.7581% ( 1) 00:15:11.001 14.507 - 14.601: 98.7657% ( 1) 00:15:11.001 15.360 - 15.455: 98.7733% ( 1) 00:15:11.001 15.455 - 15.550: 98.7809% ( 1) 00:15:11.001 15.550 - 15.644: 98.7884% ( 1) 00:15:11.001 15.644 - 15.739: 98.7960% ( 1) 00:15:11.001 15.739 - 15.834: 98.8036% ( 1) 00:15:11.001 15.834 - 15.929: 98.8111% ( 1) 00:15:11.001 15.929 - 16.024: 98.8414% ( 4) 00:15:11.001 16.024 - 16.119: 98.8944% ( 7) 00:15:11.001 16.119 - 16.213: 98.9096% ( 2) 00:15:11.001 16.213 - 16.308: 98.9626% ( 7) 00:15:11.001 16.308 - 16.403: 99.0005% ( 5) 00:15:11.001 16.403 - 16.498: 99.0232% ( 3) 00:15:11.001 16.498 - 16.593: 99.0535% ( 4) 00:15:11.001 16.593 - 16.687: 99.0913% ( 5) 00:15:11.001 16.687 - 16.782: 99.1292% ( 5) 00:15:11.001 16.782 - 16.877: 99.1898% ( 8) 00:15:11.001 16.877 - 16.972: 99.2125% ( 3) 00:15:11.001 16.972 - 17.067: 99.2201% ( 1) 00:15:11.001 17.161 - 17.256: 99.2503% ( 4) 00:15:11.001 17.541 - 17.636: 99.2579% ( 1) 00:15:11.001 17.730 - 17.825: 99.2882% ( 4) 00:15:11.001 17.825 - 17.920: 99.3033% ( 2) 00:15:11.001 17.920 - 18.015: 99.3185% ( 2) 00:15:11.001 18.394 - 18.489: 99.3261% ( 1) 00:15:11.001 20.764 - 20.859: 99.3336% ( 1) 00:15:11.001 21.144 - 21.239: 99.3412% ( 1) 00:15:11.001 23.893 - 23.988:
99.3488% ( 1) 00:15:11.001 3980.705 - 4004.978: 99.7955% ( 59) 00:15:11.001 4004.978 - 4029.250: 99.9924% ( 26) 00:15:11.001 5000.154 - 5024.427: 100.0000% ( 1) 00:15:11.001 00:15:11.001 23:55:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:11.001 23:55:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:11.001 23:55:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:11.001 23:55:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:11.001 23:55:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:11.001 [ 00:15:11.001 { 00:15:11.001 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:11.001 "subtype": "Discovery", 00:15:11.001 "listen_addresses": [], 00:15:11.001 "allow_any_host": true, 00:15:11.001 "hosts": [] 00:15:11.001 }, 00:15:11.001 { 00:15:11.001 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:11.001 "subtype": "NVMe", 00:15:11.001 "listen_addresses": [ 00:15:11.001 { 00:15:11.001 "trtype": "VFIOUSER", 00:15:11.001 "adrfam": "IPv4", 00:15:11.001 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:11.001 "trsvcid": "0" 00:15:11.001 } 00:15:11.001 ], 00:15:11.001 "allow_any_host": true, 00:15:11.001 "hosts": [], 00:15:11.001 "serial_number": "SPDK1", 00:15:11.001 "model_number": "SPDK bdev Controller", 00:15:11.001 "max_namespaces": 32, 00:15:11.001 "min_cntlid": 1, 00:15:11.001 "max_cntlid": 65519, 00:15:11.001 "namespaces": [ 00:15:11.001 { 00:15:11.001 "nsid": 1, 00:15:11.001 "bdev_name": "Malloc1", 00:15:11.001 "name": "Malloc1", 00:15:11.001 "nguid": "CA356D1F43534E59A84493E7CDCAF490", 00:15:11.001 "uuid": "ca356d1f-4353-4e59-a844-93e7cdcaf490" 00:15:11.001 } 00:15:11.001 ] 00:15:11.001 
}, 00:15:11.001 { 00:15:11.001 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:11.001 "subtype": "NVMe", 00:15:11.001 "listen_addresses": [ 00:15:11.001 { 00:15:11.001 "trtype": "VFIOUSER", 00:15:11.001 "adrfam": "IPv4", 00:15:11.001 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:11.001 "trsvcid": "0" 00:15:11.001 } 00:15:11.001 ], 00:15:11.001 "allow_any_host": true, 00:15:11.001 "hosts": [], 00:15:11.001 "serial_number": "SPDK2", 00:15:11.001 "model_number": "SPDK bdev Controller", 00:15:11.001 "max_namespaces": 32, 00:15:11.001 "min_cntlid": 1, 00:15:11.001 "max_cntlid": 65519, 00:15:11.001 "namespaces": [ 00:15:11.001 { 00:15:11.001 "nsid": 1, 00:15:11.001 "bdev_name": "Malloc2", 00:15:11.001 "name": "Malloc2", 00:15:11.001 "nguid": "03BDD86106F546079853CAA4FCA2327F", 00:15:11.001 "uuid": "03bdd861-06f5-4607-9853-caa4fca2327f" 00:15:11.001 } 00:15:11.001 ] 00:15:11.001 } 00:15:11.001 ] 00:15:11.001 23:55:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:11.001 23:55:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=750244 00:15:11.001 23:55:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:11.001 23:55:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:11.001 23:55:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:11.001 23:55:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:11.001 23:55:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:11.001 23:55:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:11.001 23:55:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:11.001 23:55:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:11.001 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.259 [2024-07-24 23:55:16.740570] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.259 Malloc3 00:15:11.259 23:55:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:11.517 [2024-07-24 23:55:17.105052] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.517 23:55:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:11.517 Asynchronous Event Request test 00:15:11.517 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.517 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.517 Registering asynchronous event callbacks... 00:15:11.517 Starting namespace attribute notice tests for all controllers... 00:15:11.517 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:11.517 aer_cb - Changed Namespace 00:15:11.517 Cleaning up... 
00:15:11.775 [ 00:15:11.775 { 00:15:11.775 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:11.775 "subtype": "Discovery", 00:15:11.775 "listen_addresses": [], 00:15:11.775 "allow_any_host": true, 00:15:11.775 "hosts": [] 00:15:11.775 }, 00:15:11.775 { 00:15:11.775 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:11.775 "subtype": "NVMe", 00:15:11.775 "listen_addresses": [ 00:15:11.775 { 00:15:11.775 "trtype": "VFIOUSER", 00:15:11.775 "adrfam": "IPv4", 00:15:11.775 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:11.775 "trsvcid": "0" 00:15:11.775 } 00:15:11.775 ], 00:15:11.775 "allow_any_host": true, 00:15:11.775 "hosts": [], 00:15:11.775 "serial_number": "SPDK1", 00:15:11.775 "model_number": "SPDK bdev Controller", 00:15:11.775 "max_namespaces": 32, 00:15:11.775 "min_cntlid": 1, 00:15:11.775 "max_cntlid": 65519, 00:15:11.775 "namespaces": [ 00:15:11.775 { 00:15:11.775 "nsid": 1, 00:15:11.775 "bdev_name": "Malloc1", 00:15:11.775 "name": "Malloc1", 00:15:11.775 "nguid": "CA356D1F43534E59A84493E7CDCAF490", 00:15:11.775 "uuid": "ca356d1f-4353-4e59-a844-93e7cdcaf490" 00:15:11.775 }, 00:15:11.775 { 00:15:11.775 "nsid": 2, 00:15:11.775 "bdev_name": "Malloc3", 00:15:11.775 "name": "Malloc3", 00:15:11.776 "nguid": "9A2A03C1FF70418BB990590170AAE89C", 00:15:11.776 "uuid": "9a2a03c1-ff70-418b-b990-590170aae89c" 00:15:11.776 } 00:15:11.776 ] 00:15:11.776 }, 00:15:11.776 { 00:15:11.776 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:11.776 "subtype": "NVMe", 00:15:11.776 "listen_addresses": [ 00:15:11.776 { 00:15:11.776 "trtype": "VFIOUSER", 00:15:11.776 "adrfam": "IPv4", 00:15:11.776 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:11.776 "trsvcid": "0" 00:15:11.776 } 00:15:11.776 ], 00:15:11.776 "allow_any_host": true, 00:15:11.776 "hosts": [], 00:15:11.776 "serial_number": "SPDK2", 00:15:11.776 "model_number": "SPDK bdev Controller", 00:15:11.776 "max_namespaces": 32, 00:15:11.776 "min_cntlid": 1, 00:15:11.776 "max_cntlid": 65519, 00:15:11.776 "namespaces": [ 
00:15:11.776 { 00:15:11.776 "nsid": 1, 00:15:11.776 "bdev_name": "Malloc2", 00:15:11.776 "name": "Malloc2", 00:15:11.776 "nguid": "03BDD86106F546079853CAA4FCA2327F", 00:15:11.776 "uuid": "03bdd861-06f5-4607-9853-caa4fca2327f" 00:15:11.776 } 00:15:11.776 ] 00:15:11.776 } 00:15:11.776 ] 00:15:11.776 23:55:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 750244 00:15:11.776 23:55:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:11.776 23:55:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:11.776 23:55:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:11.776 23:55:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:11.776 [2024-07-24 23:55:17.386840] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:15:11.776 [2024-07-24 23:55:17.386883] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750377 ] 00:15:11.776 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.776 [2024-07-24 23:55:17.420239] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:11.776 [2024-07-24 23:55:17.429397] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:11.776 [2024-07-24 23:55:17.429428] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff5cc38a000 00:15:11.776 [2024-07-24 23:55:17.430395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.776 [2024-07-24 23:55:17.431396] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.776 [2024-07-24 23:55:17.432422] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.776 [2024-07-24 23:55:17.433416] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:11.776 [2024-07-24 23:55:17.434441] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:11.776 [2024-07-24 23:55:17.435463] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.776 [2024-07-24 23:55:17.436462] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, 
Flags 0x3, Cap offset 0 00:15:11.776 [2024-07-24 23:55:17.437464] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.776 [2024-07-24 23:55:17.438494] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:11.776 [2024-07-24 23:55:17.438516] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff5cb140000 00:15:11.776 [2024-07-24 23:55:17.439631] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:11.776 [2024-07-24 23:55:17.454798] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:11.776 [2024-07-24 23:55:17.454830] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:11.776 [2024-07-24 23:55:17.459932] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:11.776 [2024-07-24 23:55:17.459984] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:11.776 [2024-07-24 23:55:17.460068] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:11.776 [2024-07-24 23:55:17.460113] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:11.776 [2024-07-24 23:55:17.460126] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:11.776 [2024-07-24 23:55:17.460937] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:11.776 [2024-07-24 23:55:17.460963] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:11.776 [2024-07-24 23:55:17.460977] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:11.776 [2024-07-24 23:55:17.461945] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:11.776 [2024-07-24 23:55:17.461965] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:11.776 [2024-07-24 23:55:17.461978] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:11.776 [2024-07-24 23:55:17.462953] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:11.776 [2024-07-24 23:55:17.462973] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:11.776 [2024-07-24 23:55:17.463963] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:11.776 [2024-07-24 23:55:17.463983] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:11.776 [2024-07-24 23:55:17.463992] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:11.776 [2024-07-24 23:55:17.464004] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:11.776 [2024-07-24 23:55:17.464123] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:11.776 [2024-07-24 23:55:17.464133] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:11.776 [2024-07-24 23:55:17.464142] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:11.776 [2024-07-24 23:55:17.464971] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:11.776 [2024-07-24 23:55:17.465975] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:11.776 [2024-07-24 23:55:17.466987] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:11.776 [2024-07-24 23:55:17.467988] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:11.776 [2024-07-24 23:55:17.468054] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:11.776 [2024-07-24 23:55:17.469003] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:11.776 [2024-07-24 23:55:17.469022] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:11.776 [2024-07-24 23:55:17.469031] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:11.776 [2024-07-24 23:55:17.469055] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:11.776 [2024-07-24 23:55:17.469071] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:11.776 [2024-07-24 23:55:17.469120] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:11.776 [2024-07-24 23:55:17.469132] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.776 [2024-07-24 23:55:17.469152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.776 [2024-07-24 23:55:17.477121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:11.776 [2024-07-24 23:55:17.477149] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:11.776 [2024-07-24 23:55:17.477160] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:11.776 [2024-07-24 23:55:17.477168] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:11.776 [2024-07-24 23:55:17.477176] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:11.776 [2024-07-24 23:55:17.477184] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:11.777 [2024-07-24 23:55:17.477192] 
nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:11.777 [2024-07-24 23:55:17.477200] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:11.777 [2024-07-24 23:55:17.477213] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:11.777 [2024-07-24 23:55:17.477229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:11.777 [2024-07-24 23:55:17.485113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:11.777 [2024-07-24 23:55:17.485138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.777 [2024-07-24 23:55:17.485152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.777 [2024-07-24 23:55:17.485165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.777 [2024-07-24 23:55:17.485178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.777 [2024-07-24 23:55:17.485187] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:11.777 [2024-07-24 23:55:17.485205] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:11.777 [2024-07-24 23:55:17.485220] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:12.036 [2024-07-24 23:55:17.493115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:12.036 [2024-07-24 23:55:17.493134] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:12.036 [2024-07-24 23:55:17.493143] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:12.036 [2024-07-24 23:55:17.493155] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:12.036 [2024-07-24 23:55:17.493174] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:12.036 [2024-07-24 23:55:17.493189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:12.036 [2024-07-24 23:55:17.501115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:12.036 [2024-07-24 23:55:17.501193] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:12.036 [2024-07-24 23:55:17.501209] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:12.036 [2024-07-24 23:55:17.501223] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:12.036 [2024-07-24 23:55:17.501232] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:12.036 [2024-07-24 23:55:17.501242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:12.036 [2024-07-24 23:55:17.509129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:12.036 [2024-07-24 23:55:17.509151] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:12.036 [2024-07-24 23:55:17.509167] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:12.036 [2024-07-24 23:55:17.509181] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:12.036 [2024-07-24 23:55:17.509193] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:12.036 [2024-07-24 23:55:17.509201] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.036 [2024-07-24 23:55:17.509211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.036 [2024-07-24 23:55:17.517113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:12.036 [2024-07-24 23:55:17.517143] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:12.036 [2024-07-24 23:55:17.517159] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:15:12.036 [2024-07-24 23:55:17.517172] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:12.036 [2024-07-24 23:55:17.517181] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.036 [2024-07-24 23:55:17.517190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.036 [2024-07-24 23:55:17.525115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:12.036 [2024-07-24 23:55:17.525137] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:12.036 [2024-07-24 23:55:17.525150] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:12.036 [2024-07-24 23:55:17.525167] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:12.036 [2024-07-24 23:55:17.525182] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:12.036 [2024-07-24 23:55:17.525191] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:12.036 [2024-07-24 23:55:17.525200] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:12.036 [2024-07-24 23:55:17.525208] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:12.036 [2024-07-24 
23:55:17.525216] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:12.036 [2024-07-24 23:55:17.525246] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:12.036 [2024-07-24 23:55:17.533115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:12.036 [2024-07-24 23:55:17.533141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:12.036 [2024-07-24 23:55:17.541112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:12.036 [2024-07-24 23:55:17.541138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:12.036 [2024-07-24 23:55:17.549115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:12.036 [2024-07-24 23:55:17.549141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:12.036 [2024-07-24 23:55:17.557127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:12.036 [2024-07-24 23:55:17.557155] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:12.036 [2024-07-24 23:55:17.557165] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:12.036 [2024-07-24 23:55:17.557171] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:12.036 [2024-07-24 23:55:17.557178] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 
00:15:12.036 [2024-07-24 23:55:17.557187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:12.036 [2024-07-24 23:55:17.557199] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:12.036 [2024-07-24 23:55:17.557207] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:12.036 [2024-07-24 23:55:17.557217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:12.036 [2024-07-24 23:55:17.557242] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:12.036 [2024-07-24 23:55:17.557251] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.036 [2024-07-24 23:55:17.557260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.036 [2024-07-24 23:55:17.557273] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:12.036 [2024-07-24 23:55:17.557282] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:12.036 [2024-07-24 23:55:17.557291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:12.036 [2024-07-24 23:55:17.565129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:12.036 [2024-07-24 23:55:17.565160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:12.036 [2024-07-24 
23:55:17.565176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:12.036 [2024-07-24 23:55:17.565190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:12.036 ===================================================== 00:15:12.037 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:12.037 ===================================================== 00:15:12.037 Controller Capabilities/Features 00:15:12.037 ================================ 00:15:12.037 Vendor ID: 4e58 00:15:12.037 Subsystem Vendor ID: 4e58 00:15:12.037 Serial Number: SPDK2 00:15:12.037 Model Number: SPDK bdev Controller 00:15:12.037 Firmware Version: 24.05.1 00:15:12.037 Recommended Arb Burst: 6 00:15:12.037 IEEE OUI Identifier: 8d 6b 50 00:15:12.037 Multi-path I/O 00:15:12.037 May have multiple subsystem ports: Yes 00:15:12.037 May have multiple controllers: Yes 00:15:12.037 Associated with SR-IOV VF: No 00:15:12.037 Max Data Transfer Size: 131072 00:15:12.037 Max Number of Namespaces: 32 00:15:12.037 Max Number of I/O Queues: 127 00:15:12.037 NVMe Specification Version (VS): 1.3 00:15:12.037 NVMe Specification Version (Identify): 1.3 00:15:12.037 Maximum Queue Entries: 256 00:15:12.037 Contiguous Queues Required: Yes 00:15:12.037 Arbitration Mechanisms Supported 00:15:12.037 Weighted Round Robin: Not Supported 00:15:12.037 Vendor Specific: Not Supported 00:15:12.037 Reset Timeout: 15000 ms 00:15:12.037 Doorbell Stride: 4 bytes 00:15:12.037 NVM Subsystem Reset: Not Supported 00:15:12.037 Command Sets Supported 00:15:12.037 NVM Command Set: Supported 00:15:12.037 Boot Partition: Not Supported 00:15:12.037 Memory Page Size Minimum: 4096 bytes 00:15:12.037 Memory Page Size Maximum: 4096 bytes 00:15:12.037 Persistent Memory Region: Not Supported 00:15:12.037 Optional Asynchronous Events Supported 00:15:12.037 
Namespace Attribute Notices: Supported 00:15:12.037 Firmware Activation Notices: Not Supported 00:15:12.037 ANA Change Notices: Not Supported 00:15:12.037 PLE Aggregate Log Change Notices: Not Supported 00:15:12.037 LBA Status Info Alert Notices: Not Supported 00:15:12.037 EGE Aggregate Log Change Notices: Not Supported 00:15:12.037 Normal NVM Subsystem Shutdown event: Not Supported 00:15:12.037 Zone Descriptor Change Notices: Not Supported 00:15:12.037 Discovery Log Change Notices: Not Supported 00:15:12.037 Controller Attributes 00:15:12.037 128-bit Host Identifier: Supported 00:15:12.037 Non-Operational Permissive Mode: Not Supported 00:15:12.037 NVM Sets: Not Supported 00:15:12.037 Read Recovery Levels: Not Supported 00:15:12.037 Endurance Groups: Not Supported 00:15:12.037 Predictable Latency Mode: Not Supported 00:15:12.037 Traffic Based Keep ALive: Not Supported 00:15:12.037 Namespace Granularity: Not Supported 00:15:12.037 SQ Associations: Not Supported 00:15:12.037 UUID List: Not Supported 00:15:12.037 Multi-Domain Subsystem: Not Supported 00:15:12.037 Fixed Capacity Management: Not Supported 00:15:12.037 Variable Capacity Management: Not Supported 00:15:12.037 Delete Endurance Group: Not Supported 00:15:12.037 Delete NVM Set: Not Supported 00:15:12.037 Extended LBA Formats Supported: Not Supported 00:15:12.037 Flexible Data Placement Supported: Not Supported 00:15:12.037 00:15:12.037 Controller Memory Buffer Support 00:15:12.037 ================================ 00:15:12.037 Supported: No 00:15:12.037 00:15:12.037 Persistent Memory Region Support 00:15:12.037 ================================ 00:15:12.037 Supported: No 00:15:12.037 00:15:12.037 Admin Command Set Attributes 00:15:12.037 ============================ 00:15:12.037 Security Send/Receive: Not Supported 00:15:12.037 Format NVM: Not Supported 00:15:12.037 Firmware Activate/Download: Not Supported 00:15:12.037 Namespace Management: Not Supported 00:15:12.037 Device Self-Test: Not Supported 
00:15:12.037 Directives: Not Supported 00:15:12.037 NVMe-MI: Not Supported 00:15:12.037 Virtualization Management: Not Supported 00:15:12.037 Doorbell Buffer Config: Not Supported 00:15:12.037 Get LBA Status Capability: Not Supported 00:15:12.037 Command & Feature Lockdown Capability: Not Supported 00:15:12.037 Abort Command Limit: 4 00:15:12.037 Async Event Request Limit: 4 00:15:12.037 Number of Firmware Slots: N/A 00:15:12.037 Firmware Slot 1 Read-Only: N/A 00:15:12.037 Firmware Activation Without Reset: N/A 00:15:12.037 Multiple Update Detection Support: N/A 00:15:12.037 Firmware Update Granularity: No Information Provided 00:15:12.037 Per-Namespace SMART Log: No 00:15:12.037 Asymmetric Namespace Access Log Page: Not Supported 00:15:12.037 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:12.037 Command Effects Log Page: Supported 00:15:12.037 Get Log Page Extended Data: Supported 00:15:12.037 Telemetry Log Pages: Not Supported 00:15:12.037 Persistent Event Log Pages: Not Supported 00:15:12.037 Supported Log Pages Log Page: May Support 00:15:12.037 Commands Supported & Effects Log Page: Not Supported 00:15:12.037 Feature Identifiers & Effects Log Page:May Support 00:15:12.037 NVMe-MI Commands & Effects Log Page: May Support 00:15:12.037 Data Area 4 for Telemetry Log: Not Supported 00:15:12.037 Error Log Page Entries Supported: 128 00:15:12.037 Keep Alive: Supported 00:15:12.037 Keep Alive Granularity: 10000 ms 00:15:12.037 00:15:12.037 NVM Command Set Attributes 00:15:12.037 ========================== 00:15:12.037 Submission Queue Entry Size 00:15:12.037 Max: 64 00:15:12.037 Min: 64 00:15:12.037 Completion Queue Entry Size 00:15:12.037 Max: 16 00:15:12.037 Min: 16 00:15:12.037 Number of Namespaces: 32 00:15:12.037 Compare Command: Supported 00:15:12.037 Write Uncorrectable Command: Not Supported 00:15:12.037 Dataset Management Command: Supported 00:15:12.037 Write Zeroes Command: Supported 00:15:12.037 Set Features Save Field: Not Supported 00:15:12.037 
Reservations: Not Supported 00:15:12.037 Timestamp: Not Supported 00:15:12.037 Copy: Supported 00:15:12.037 Volatile Write Cache: Present 00:15:12.037 Atomic Write Unit (Normal): 1 00:15:12.037 Atomic Write Unit (PFail): 1 00:15:12.037 Atomic Compare & Write Unit: 1 00:15:12.037 Fused Compare & Write: Supported 00:15:12.037 Scatter-Gather List 00:15:12.037 SGL Command Set: Supported (Dword aligned) 00:15:12.037 SGL Keyed: Not Supported 00:15:12.037 SGL Bit Bucket Descriptor: Not Supported 00:15:12.037 SGL Metadata Pointer: Not Supported 00:15:12.037 Oversized SGL: Not Supported 00:15:12.037 SGL Metadata Address: Not Supported 00:15:12.037 SGL Offset: Not Supported 00:15:12.037 Transport SGL Data Block: Not Supported 00:15:12.037 Replay Protected Memory Block: Not Supported 00:15:12.037 00:15:12.037 Firmware Slot Information 00:15:12.037 ========================= 00:15:12.037 Active slot: 1 00:15:12.037 Slot 1 Firmware Revision: 24.05.1 00:15:12.037 00:15:12.037 00:15:12.037 Commands Supported and Effects 00:15:12.037 ============================== 00:15:12.037 Admin Commands 00:15:12.037 -------------- 00:15:12.037 Get Log Page (02h): Supported 00:15:12.037 Identify (06h): Supported 00:15:12.037 Abort (08h): Supported 00:15:12.037 Set Features (09h): Supported 00:15:12.037 Get Features (0Ah): Supported 00:15:12.037 Asynchronous Event Request (0Ch): Supported 00:15:12.037 Keep Alive (18h): Supported 00:15:12.037 I/O Commands 00:15:12.037 ------------ 00:15:12.037 Flush (00h): Supported LBA-Change 00:15:12.037 Write (01h): Supported LBA-Change 00:15:12.037 Read (02h): Supported 00:15:12.037 Compare (05h): Supported 00:15:12.037 Write Zeroes (08h): Supported LBA-Change 00:15:12.037 Dataset Management (09h): Supported LBA-Change 00:15:12.037 Copy (19h): Supported LBA-Change 00:15:12.037 Unknown (79h): Supported LBA-Change 00:15:12.037 Unknown (7Ah): Supported 00:15:12.037 00:15:12.037 Error Log 00:15:12.037 ========= 00:15:12.037 00:15:12.037 Arbitration 00:15:12.037 
=========== 00:15:12.037 Arbitration Burst: 1 00:15:12.037 00:15:12.037 Power Management 00:15:12.037 ================ 00:15:12.037 Number of Power States: 1 00:15:12.037 Current Power State: Power State #0 00:15:12.037 Power State #0: 00:15:12.037 Max Power: 0.00 W 00:15:12.037 Non-Operational State: Operational 00:15:12.037 Entry Latency: Not Reported 00:15:12.037 Exit Latency: Not Reported 00:15:12.037 Relative Read Throughput: 0 00:15:12.037 Relative Read Latency: 0 00:15:12.037 Relative Write Throughput: 0 00:15:12.037 Relative Write Latency: 0 00:15:12.037 Idle Power: Not Reported 00:15:12.037 Active Power: Not Reported 00:15:12.038 Non-Operational Permissive Mode: Not Supported 00:15:12.038 00:15:12.038 Health Information 00:15:12.038 ================== 00:15:12.038 Critical Warnings: 00:15:12.038 Available Spare Space: OK 00:15:12.038 Temperature: OK 00:15:12.038 Device Reliability: OK 00:15:12.038 Read Only: No 00:15:12.038 Volatile Memory Backup: OK 00:15:12.038 Current Temperature: 0 Kelvin[2024-07-24 23:55:17.565319] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:12.038 [2024-07-24 23:55:17.573113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:12.038 [2024-07-24 23:55:17.573160] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:12.038 [2024-07-24 23:55:17.573177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.038 [2024-07-24 23:55:17.573188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.038 [2024-07-24 23:55:17.573198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:12.038 [2024-07-24 23:55:17.573208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.038 [2024-07-24 23:55:17.573272] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:12.038 [2024-07-24 23:55:17.573292] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:12.038 [2024-07-24 23:55:17.574278] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:12.038 [2024-07-24 23:55:17.574348] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:12.038 [2024-07-24 23:55:17.574363] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:12.038 [2024-07-24 23:55:17.575291] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:12.038 [2024-07-24 23:55:17.575316] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:12.038 [2024-07-24 23:55:17.575368] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:12.038 [2024-07-24 23:55:17.576623] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:12.038 (-273 Celsius) 00:15:12.038 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:12.038 Available Spare: 0% 00:15:12.038 Available Spare Threshold: 0% 00:15:12.038 Life Percentage Used: 0% 00:15:12.038 Data Units Read: 0 00:15:12.038 Data Units Written: 0 00:15:12.038 Host Read Commands: 0 00:15:12.038 Host 
Write Commands: 0 00:15:12.038 Controller Busy Time: 0 minutes 00:15:12.038 Power Cycles: 0 00:15:12.038 Power On Hours: 0 hours 00:15:12.038 Unsafe Shutdowns: 0 00:15:12.038 Unrecoverable Media Errors: 0 00:15:12.038 Lifetime Error Log Entries: 0 00:15:12.038 Warning Temperature Time: 0 minutes 00:15:12.038 Critical Temperature Time: 0 minutes 00:15:12.038 00:15:12.038 Number of Queues 00:15:12.038 ================ 00:15:12.038 Number of I/O Submission Queues: 127 00:15:12.038 Number of I/O Completion Queues: 127 00:15:12.038 00:15:12.038 Active Namespaces 00:15:12.038 ================= 00:15:12.038 Namespace ID:1 00:15:12.038 Error Recovery Timeout: Unlimited 00:15:12.038 Command Set Identifier: NVM (00h) 00:15:12.038 Deallocate: Supported 00:15:12.038 Deallocated/Unwritten Error: Not Supported 00:15:12.038 Deallocated Read Value: Unknown 00:15:12.038 Deallocate in Write Zeroes: Not Supported 00:15:12.038 Deallocated Guard Field: 0xFFFF 00:15:12.038 Flush: Supported 00:15:12.038 Reservation: Supported 00:15:12.038 Namespace Sharing Capabilities: Multiple Controllers 00:15:12.038 Size (in LBAs): 131072 (0GiB) 00:15:12.038 Capacity (in LBAs): 131072 (0GiB) 00:15:12.038 Utilization (in LBAs): 131072 (0GiB) 00:15:12.038 NGUID: 03BDD86106F546079853CAA4FCA2327F 00:15:12.038 UUID: 03bdd861-06f5-4607-9853-caa4fca2327f 00:15:12.038 Thin Provisioning: Not Supported 00:15:12.038 Per-NS Atomic Units: Yes 00:15:12.038 Atomic Boundary Size (Normal): 0 00:15:12.038 Atomic Boundary Size (PFail): 0 00:15:12.038 Atomic Boundary Offset: 0 00:15:12.038 Maximum Single Source Range Length: 65535 00:15:12.038 Maximum Copy Length: 65535 00:15:12.038 Maximum Source Range Count: 1 00:15:12.038 NGUID/EUI64 Never Reused: No 00:15:12.038 Namespace Write Protected: No 00:15:12.038 Number of LBA Formats: 1 00:15:12.038 Current LBA Format: LBA Format #00 00:15:12.038 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:12.038 00:15:12.038 23:55:17 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:12.038 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.296 [2024-07-24 23:55:17.805935] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.619 Initializing NVMe Controllers 00:15:17.619 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:17.619 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:17.619 Initialization complete. Launching workers. 00:15:17.619 ======================================================== 00:15:17.619 Latency(us) 00:15:17.619 Device Information : IOPS MiB/s Average min max 00:15:17.619 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36009.96 140.66 3554.08 1140.03 8523.69 00:15:17.619 ======================================================== 00:15:17.619 Total : 36009.96 140.66 3554.08 1140.03 8523.69 00:15:17.619 00:15:17.619 [2024-07-24 23:55:22.907475] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.619 23:55:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:17.619 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.619 [2024-07-24 23:55:23.138123] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:22.879 Initializing NVMe Controllers 00:15:22.879 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: 
nqn.2019-07.io.spdk:cnode2 00:15:22.879 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:22.879 Initialization complete. Launching workers. 00:15:22.879 ======================================================== 00:15:22.879 Latency(us) 00:15:22.879 Device Information : IOPS MiB/s Average min max 00:15:22.879 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33658.19 131.48 3802.32 1177.55 9562.60 00:15:22.879 ======================================================== 00:15:22.879 Total : 33658.19 131.48 3802.32 1177.55 9562.60 00:15:22.879 00:15:22.879 [2024-07-24 23:55:28.159164] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:22.879 23:55:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:22.879 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.879 [2024-07-24 23:55:28.365017] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:28.138 [2024-07-24 23:55:33.500254] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:28.138 Initializing NVMe Controllers 00:15:28.138 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:28.138 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:28.138 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:28.138 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:28.138 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:28.138 Initialization complete. 
Launching workers. 00:15:28.138 Starting thread on core 2 00:15:28.138 Starting thread on core 3 00:15:28.138 Starting thread on core 1 00:15:28.138 23:55:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:28.138 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.138 [2024-07-24 23:55:33.807605] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.417 [2024-07-24 23:55:36.922397] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.417 Initializing NVMe Controllers 00:15:31.417 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.417 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.417 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:31.417 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:31.417 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:31.417 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:31.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:31.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:31.417 Initialization complete. Launching workers. 
00:15:31.417 Starting thread on core 1 with urgent priority queue 00:15:31.417 Starting thread on core 2 with urgent priority queue 00:15:31.417 Starting thread on core 3 with urgent priority queue 00:15:31.417 Starting thread on core 0 with urgent priority queue 00:15:31.417 SPDK bdev Controller (SPDK2 ) core 0: 2003.67 IO/s 49.91 secs/100000 ios 00:15:31.417 SPDK bdev Controller (SPDK2 ) core 1: 2051.00 IO/s 48.76 secs/100000 ios 00:15:31.417 SPDK bdev Controller (SPDK2 ) core 2: 1977.67 IO/s 50.56 secs/100000 ios 00:15:31.417 SPDK bdev Controller (SPDK2 ) core 3: 1907.67 IO/s 52.42 secs/100000 ios 00:15:31.417 ======================================================== 00:15:31.417 00:15:31.417 23:55:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:31.417 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.674 [2024-07-24 23:55:37.224241] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.674 Initializing NVMe Controllers 00:15:31.674 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.675 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.675 Namespace ID: 1 size: 0GB 00:15:31.675 Initialization complete. 00:15:31.675 INFO: using host memory buffer for IO 00:15:31.675 Hello world! 
00:15:31.675 [2024-07-24 23:55:37.236314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.675 23:55:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:31.675 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.932 [2024-07-24 23:55:37.516229] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.305 Initializing NVMe Controllers 00:15:33.305 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.305 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.305 Initialization complete. Launching workers. 00:15:33.305 submit (in ns) avg, min, max = 9345.0, 3551.1, 4016816.7 00:15:33.305 complete (in ns) avg, min, max = 28160.4, 2047.8, 4016065.6 00:15:33.305 00:15:33.305 Submit histogram 00:15:33.305 ================ 00:15:33.305 Range in us Cumulative Count 00:15:33.305 3.532 - 3.556: 0.0299% ( 4) 00:15:33.305 3.556 - 3.579: 1.5019% ( 197) 00:15:33.305 3.579 - 3.603: 6.2393% ( 634) 00:15:33.305 3.603 - 3.627: 15.1760% ( 1196) 00:15:33.305 3.627 - 3.650: 27.9160% ( 1705) 00:15:33.305 3.650 - 3.674: 38.0259% ( 1353) 00:15:33.305 3.674 - 3.698: 46.0061% ( 1068) 00:15:33.305 3.698 - 3.721: 51.2292% ( 699) 00:15:33.305 3.721 - 3.745: 56.0039% ( 639) 00:15:33.305 3.745 - 3.769: 59.7997% ( 508) 00:15:33.305 3.769 - 3.793: 63.2892% ( 467) 00:15:33.305 3.793 - 3.816: 65.6953% ( 322) 00:15:33.305 3.816 - 3.840: 68.7813% ( 413) 00:15:33.305 3.840 - 3.864: 73.4962% ( 631) 00:15:33.306 3.864 - 3.887: 78.9659% ( 732) 00:15:33.306 3.887 - 3.911: 83.1652% ( 562) 00:15:33.306 3.911 - 3.935: 85.7057% ( 340) 00:15:33.306 3.935 - 3.959: 87.5215% ( 243) 00:15:33.306 3.959 - 3.982: 89.1654% ( 220) 00:15:33.306 3.982 - 4.006: 90.5328% ( 
183) 00:15:33.306 4.006 - 4.030: 91.5116% ( 131) 00:15:33.306 4.030 - 4.053: 92.2738% ( 102) 00:15:33.306 4.053 - 4.077: 93.2302% ( 128) 00:15:33.306 4.077 - 4.101: 94.1867% ( 128) 00:15:33.306 4.101 - 4.124: 94.9189% ( 98) 00:15:33.306 4.124 - 4.148: 95.4270% ( 68) 00:15:33.306 4.148 - 4.172: 95.7110% ( 38) 00:15:33.306 4.172 - 4.196: 96.0024% ( 39) 00:15:33.306 4.196 - 4.219: 96.2789% ( 37) 00:15:33.306 4.219 - 4.243: 96.4358% ( 21) 00:15:33.306 4.243 - 4.267: 96.5329% ( 13) 00:15:33.306 4.267 - 4.290: 96.6450% ( 15) 00:15:33.306 4.290 - 4.314: 96.7646% ( 16) 00:15:33.306 4.314 - 4.338: 96.8243% ( 8) 00:15:33.306 4.338 - 4.361: 96.9065% ( 11) 00:15:33.306 4.361 - 4.385: 96.9663% ( 8) 00:15:33.306 4.385 - 4.409: 97.0037% ( 5) 00:15:33.306 4.409 - 4.433: 97.0634% ( 8) 00:15:33.306 4.433 - 4.456: 97.0933% ( 4) 00:15:33.306 4.456 - 4.480: 97.1083% ( 2) 00:15:33.306 4.480 - 4.504: 97.1157% ( 1) 00:15:33.306 4.504 - 4.527: 97.1456% ( 4) 00:15:33.306 4.527 - 4.551: 97.1606% ( 2) 00:15:33.306 4.551 - 4.575: 97.1755% ( 2) 00:15:33.306 4.575 - 4.599: 97.1830% ( 1) 00:15:33.306 4.599 - 4.622: 97.1979% ( 2) 00:15:33.306 4.622 - 4.646: 97.2054% ( 1) 00:15:33.306 4.670 - 4.693: 97.2129% ( 1) 00:15:33.306 4.741 - 4.764: 97.2353% ( 3) 00:15:33.306 4.764 - 4.788: 97.2428% ( 1) 00:15:33.306 4.788 - 4.812: 97.2727% ( 4) 00:15:33.306 4.812 - 4.836: 97.2951% ( 3) 00:15:33.306 4.836 - 4.859: 97.3324% ( 5) 00:15:33.306 4.859 - 4.883: 97.3773% ( 6) 00:15:33.306 4.883 - 4.907: 97.4296% ( 7) 00:15:33.306 4.907 - 4.930: 97.4819% ( 7) 00:15:33.306 4.930 - 4.954: 97.5118% ( 4) 00:15:33.306 4.954 - 4.978: 97.5566% ( 6) 00:15:33.306 4.978 - 5.001: 97.5940% ( 5) 00:15:33.306 5.001 - 5.025: 97.6239% ( 4) 00:15:33.306 5.025 - 5.049: 97.6463% ( 3) 00:15:33.306 5.049 - 5.073: 97.6687% ( 3) 00:15:33.306 5.073 - 5.096: 97.7060% ( 5) 00:15:33.306 5.096 - 5.120: 97.7135% ( 1) 00:15:33.306 5.120 - 5.144: 97.7584% ( 6) 00:15:33.306 5.144 - 5.167: 97.7733% ( 2) 00:15:33.306 5.167 - 5.191: 97.7957% ( 3) 
00:15:33.306 5.191 - 5.215: 97.8181% ( 3) 00:15:33.306 5.215 - 5.239: 97.8405% ( 3) 00:15:33.306 5.239 - 5.262: 97.8480% ( 1) 00:15:33.306 5.262 - 5.286: 97.8630% ( 2) 00:15:33.306 5.286 - 5.310: 97.8704% ( 1) 00:15:33.306 5.310 - 5.333: 97.8854% ( 2) 00:15:33.306 5.357 - 5.381: 97.8928% ( 1) 00:15:33.306 5.381 - 5.404: 97.9003% ( 1) 00:15:33.306 5.452 - 5.476: 97.9153% ( 2) 00:15:33.306 5.476 - 5.499: 97.9227% ( 1) 00:15:33.306 5.499 - 5.523: 97.9302% ( 1) 00:15:33.306 5.570 - 5.594: 97.9377% ( 1) 00:15:33.306 5.618 - 5.641: 97.9452% ( 1) 00:15:33.306 5.902 - 5.926: 97.9601% ( 2) 00:15:33.306 5.950 - 5.973: 97.9676% ( 1) 00:15:33.306 5.997 - 6.021: 97.9825% ( 2) 00:15:33.306 6.021 - 6.044: 97.9900% ( 1) 00:15:33.306 6.044 - 6.068: 98.0049% ( 2) 00:15:33.306 6.068 - 6.116: 98.0423% ( 5) 00:15:33.306 6.116 - 6.163: 98.0572% ( 2) 00:15:33.306 6.163 - 6.210: 98.0647% ( 1) 00:15:33.306 6.210 - 6.258: 98.0722% ( 1) 00:15:33.306 6.305 - 6.353: 98.0797% ( 1) 00:15:33.306 6.353 - 6.400: 98.0871% ( 1) 00:15:33.306 6.447 - 6.495: 98.0946% ( 1) 00:15:33.306 6.495 - 6.542: 98.1021% ( 1) 00:15:33.306 6.590 - 6.637: 98.1170% ( 2) 00:15:33.306 6.637 - 6.684: 98.1245% ( 1) 00:15:33.306 6.874 - 6.921: 98.1320% ( 1) 00:15:33.306 7.016 - 7.064: 98.1469% ( 2) 00:15:33.306 7.111 - 7.159: 98.1544% ( 1) 00:15:33.306 7.159 - 7.206: 98.1618% ( 1) 00:15:33.306 7.206 - 7.253: 98.1693% ( 1) 00:15:33.306 7.301 - 7.348: 98.1917% ( 3) 00:15:33.306 7.443 - 7.490: 98.1992% ( 1) 00:15:33.306 7.490 - 7.538: 98.2142% ( 2) 00:15:33.306 7.633 - 7.680: 98.2440% ( 4) 00:15:33.306 7.680 - 7.727: 98.2515% ( 1) 00:15:33.306 7.822 - 7.870: 98.2590% ( 1) 00:15:33.306 8.059 - 8.107: 98.2665% ( 1) 00:15:33.306 8.107 - 8.154: 98.2814% ( 2) 00:15:33.306 8.154 - 8.201: 98.2963% ( 2) 00:15:33.306 8.201 - 8.249: 98.3113% ( 2) 00:15:33.306 8.249 - 8.296: 98.3262% ( 2) 00:15:33.306 8.296 - 8.344: 98.3412% ( 2) 00:15:33.306 8.344 - 8.391: 98.3636% ( 3) 00:15:33.306 8.391 - 8.439: 98.3711% ( 1) 00:15:33.306 8.439 - 
8.486: 98.3860% ( 2) 00:15:33.306 8.486 - 8.533: 98.3935% ( 1) 00:15:33.306 8.581 - 8.628: 98.4159% ( 3) 00:15:33.306 8.628 - 8.676: 98.4234% ( 1) 00:15:33.306 8.676 - 8.723: 98.4383% ( 2) 00:15:33.306 8.723 - 8.770: 98.4458% ( 1) 00:15:33.306 8.770 - 8.818: 98.4533% ( 1) 00:15:33.306 8.818 - 8.865: 98.4607% ( 1) 00:15:33.306 9.055 - 9.102: 98.4682% ( 1) 00:15:33.306 9.102 - 9.150: 98.4832% ( 2) 00:15:33.306 9.150 - 9.197: 98.4906% ( 1) 00:15:33.306 9.197 - 9.244: 98.4981% ( 1) 00:15:33.306 9.339 - 9.387: 98.5056% ( 1) 00:15:33.306 9.387 - 9.434: 98.5130% ( 1) 00:15:33.306 9.529 - 9.576: 98.5205% ( 1) 00:15:33.306 9.624 - 9.671: 98.5280% ( 1) 00:15:33.306 9.671 - 9.719: 98.5429% ( 2) 00:15:33.306 9.719 - 9.766: 98.5504% ( 1) 00:15:33.306 9.766 - 9.813: 98.5579% ( 1) 00:15:33.306 9.813 - 9.861: 98.5653% ( 1) 00:15:33.306 9.956 - 10.003: 98.5728% ( 1) 00:15:33.306 10.003 - 10.050: 98.5803% ( 1) 00:15:33.306 10.098 - 10.145: 98.5878% ( 1) 00:15:33.306 10.193 - 10.240: 98.5952% ( 1) 00:15:33.306 10.335 - 10.382: 98.6027% ( 1) 00:15:33.306 10.382 - 10.430: 98.6102% ( 1) 00:15:33.306 10.430 - 10.477: 98.6251% ( 2) 00:15:33.306 10.524 - 10.572: 98.6401% ( 2) 00:15:33.306 10.667 - 10.714: 98.6475% ( 1) 00:15:33.306 10.809 - 10.856: 98.6550% ( 1) 00:15:33.306 11.093 - 11.141: 98.6625% ( 1) 00:15:33.306 11.141 - 11.188: 98.6700% ( 1) 00:15:33.306 11.236 - 11.283: 98.6774% ( 1) 00:15:33.306 11.520 - 11.567: 98.6849% ( 1) 00:15:33.306 11.804 - 11.852: 98.6998% ( 2) 00:15:33.306 11.852 - 11.899: 98.7073% ( 1) 00:15:33.306 11.994 - 12.041: 98.7148% ( 1) 00:15:33.306 12.041 - 12.089: 98.7223% ( 1) 00:15:33.306 12.136 - 12.231: 98.7297% ( 1) 00:15:33.306 12.326 - 12.421: 98.7521% ( 3) 00:15:33.306 12.421 - 12.516: 98.7596% ( 1) 00:15:33.306 12.895 - 12.990: 98.7746% ( 2) 00:15:33.306 13.274 - 13.369: 98.7895% ( 2) 00:15:33.306 13.369 - 13.464: 98.7970% ( 1) 00:15:33.306 13.559 - 13.653: 98.8045% ( 1) 00:15:33.306 13.748 - 13.843: 98.8194% ( 2) 00:15:33.306 13.843 - 13.938: 
98.8269% ( 1) 00:15:33.306 14.033 - 14.127: 98.8343% ( 1) 00:15:33.306 14.222 - 14.317: 98.8418% ( 1) 00:15:33.306 14.317 - 14.412: 98.8493% ( 1) 00:15:33.306 14.412 - 14.507: 98.8568% ( 1) 00:15:33.306 14.507 - 14.601: 98.8717% ( 2) 00:15:33.306 15.360 - 15.455: 98.8792% ( 1) 00:15:33.306 16.972 - 17.067: 98.8866% ( 1) 00:15:33.306 17.067 - 17.161: 98.9016% ( 2) 00:15:33.306 17.161 - 17.256: 98.9091% ( 1) 00:15:33.306 17.256 - 17.351: 98.9240% ( 2) 00:15:33.306 17.351 - 17.446: 98.9390% ( 2) 00:15:33.306 17.446 - 17.541: 98.9539% ( 2) 00:15:33.306 17.541 - 17.636: 99.0286% ( 10) 00:15:33.306 17.636 - 17.730: 99.0735% ( 6) 00:15:33.306 17.730 - 17.825: 99.0959% ( 3) 00:15:33.306 17.825 - 17.920: 99.1183% ( 3) 00:15:33.306 17.920 - 18.015: 99.1631% ( 6) 00:15:33.306 18.015 - 18.110: 99.2229% ( 8) 00:15:33.306 18.110 - 18.204: 99.3051% ( 11) 00:15:33.306 18.204 - 18.299: 99.3798% ( 10) 00:15:33.306 18.299 - 18.394: 99.4545% ( 10) 00:15:33.306 18.394 - 18.489: 99.5367% ( 11) 00:15:33.306 18.489 - 18.584: 99.6114% ( 10) 00:15:33.306 18.584 - 18.679: 99.6563% ( 6) 00:15:33.306 18.679 - 18.773: 99.6936% ( 5) 00:15:33.306 18.773 - 18.868: 99.7459% ( 7) 00:15:33.306 18.868 - 18.963: 99.7534% ( 1) 00:15:33.306 18.963 - 19.058: 99.7684% ( 2) 00:15:33.306 19.058 - 19.153: 99.7758% ( 1) 00:15:33.306 19.153 - 19.247: 99.7908% ( 2) 00:15:33.306 19.532 - 19.627: 99.8057% ( 2) 00:15:33.307 19.911 - 20.006: 99.8132% ( 1) 00:15:33.307 20.101 - 20.196: 99.8207% ( 1) 00:15:33.307 20.575 - 20.670: 99.8281% ( 1) 00:15:33.307 21.807 - 21.902: 99.8356% ( 1) 00:15:33.307 22.471 - 22.566: 99.8431% ( 1) 00:15:33.307 24.462 - 24.652: 99.8506% ( 1) 00:15:33.307 24.841 - 25.031: 99.8580% ( 1) 00:15:33.307 29.013 - 29.203: 99.8655% ( 1) 00:15:33.307 3980.705 - 4004.978: 99.9851% ( 16) 00:15:33.307 4004.978 - 4029.250: 100.0000% ( 2) 00:15:33.307 00:15:33.307 Complete histogram 00:15:33.307 ================== 00:15:33.307 Range in us Cumulative Count 00:15:33.307 2.039 - 2.050: 0.1793% ( 24) 
00:15:33.307 2.050 - 2.062: 23.7764% ( 3158) 00:15:33.307 2.062 - 2.074: 42.2177% ( 2468) 00:15:33.307 2.074 - 2.086: 45.3262% ( 416) 00:15:33.307 2.086 - 2.098: 55.4584% ( 1356) 00:15:33.307 2.098 - 2.110: 60.8234% ( 718) 00:15:33.307 2.110 - 2.121: 63.6330% ( 376) 00:15:33.307 2.121 - 2.133: 73.1675% ( 1276) 00:15:33.307 2.133 - 2.145: 76.2086% ( 407) 00:15:33.307 2.145 - 2.157: 77.5686% ( 182) 00:15:33.307 2.157 - 2.169: 81.1104% ( 474) 00:15:33.307 2.169 - 2.181: 82.3209% ( 162) 00:15:33.307 2.181 - 2.193: 83.5015% ( 158) 00:15:33.307 2.193 - 2.204: 87.5663% ( 544) 00:15:33.307 2.204 - 2.216: 89.8827% ( 310) 00:15:33.307 2.216 - 2.228: 91.6013% ( 230) 00:15:33.307 2.228 - 2.240: 93.0509% ( 194) 00:15:33.307 2.240 - 2.252: 93.6412% ( 79) 00:15:33.307 2.252 - 2.264: 94.0372% ( 53) 00:15:33.307 2.264 - 2.276: 94.3062% ( 36) 00:15:33.307 2.276 - 2.287: 95.0833% ( 104) 00:15:33.307 2.287 - 2.299: 95.5466% ( 62) 00:15:33.307 2.299 - 2.311: 95.6960% ( 20) 00:15:33.307 2.311 - 2.323: 95.7782% ( 11) 00:15:33.307 2.323 - 2.335: 95.8380% ( 8) 00:15:33.307 2.335 - 2.347: 95.8978% ( 8) 00:15:33.307 2.347 - 2.359: 96.0248% ( 17) 00:15:33.307 2.359 - 2.370: 96.3237% ( 40) 00:15:33.307 2.370 - 2.382: 96.5628% ( 32) 00:15:33.307 2.382 - 2.394: 96.8617% ( 40) 00:15:33.307 2.394 - 2.406: 97.0111% ( 20) 00:15:33.307 2.406 - 2.418: 97.1979% ( 25) 00:15:33.307 2.418 - 2.430: 97.3399% ( 19) 00:15:33.307 2.430 - 2.441: 97.5417% ( 27) 00:15:33.307 2.441 - 2.453: 97.6239% ( 11) 00:15:33.307 2.453 - 2.465: 97.7733% ( 20) 00:15:33.307 2.465 - 2.477: 97.9750% ( 27) 00:15:33.307 2.477 - 2.489: 98.0572% ( 11) 00:15:33.307 2.489 - 2.501: 98.1245% ( 9) 00:15:33.307 2.501 - 2.513: 98.1992% ( 10) 00:15:33.307 2.513 - 2.524: 98.2515% ( 7) 00:15:33.307 2.524 - 2.536: 98.3337% ( 11) 00:15:33.307 2.536 - 2.548: 98.3487% ( 2) 00:15:33.307 2.548 - 2.560: 98.3636% ( 2) 00:15:33.307 2.560 - 2.572: 98.3785% ( 2) 00:15:33.307 2.572 - 2.584: 98.3860% ( 1) 00:15:33.307 2.584 - 2.596: 98.3935% ( 1) 
00:15:33.307 2.607 - 2.619: 98.4084% ( 2) 00:15:33.307 2.619 - 2.631: 98.4159% ( 1) 00:15:33.307 2.631 - 2.643: 98.4234% ( 1) [2024-07-24 23:55:38.612891] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.307 2.643 - 2.655: 98.4308% ( 1) 00:15:33.307 2.667 - 2.679: 98.4383% ( 1) 00:15:33.307 2.702 - 2.714: 98.4533% ( 2) 00:15:33.307 2.726 - 2.738: 98.4607% ( 1) 00:15:33.307 2.750 - 2.761: 98.4682% ( 1) 00:15:33.307 3.366 - 3.390: 98.4757% ( 1) 00:15:33.307 3.390 - 3.413: 98.4981% ( 3) 00:15:33.307 3.437 - 3.461: 98.5056% ( 1) 00:15:33.307 3.461 - 3.484: 98.5130% ( 1) 00:15:33.307 3.556 - 3.579: 98.5205% ( 1) 00:15:33.307 3.627 - 3.650: 98.5280% ( 1) 00:15:33.307 3.650 - 3.674: 98.5355% ( 1) 00:15:33.307 3.769 - 3.793: 98.5429% ( 1) 00:15:33.307 3.816 - 3.840: 98.5653% ( 3) 00:15:33.307 3.911 - 3.935: 98.5728% ( 1) 00:15:33.307 3.982 - 4.006: 98.5803% ( 1) 00:15:33.307 4.006 - 4.030: 98.5878% ( 1) 00:15:33.307 5.191 - 5.215: 98.5952% ( 1) 00:15:33.307 5.452 - 5.476: 98.6027% ( 1) 00:15:33.307 5.641 - 5.665: 98.6102% ( 1) 00:15:33.307 5.713 - 5.736: 98.6176% ( 1) 00:15:33.307 5.879 - 5.902: 98.6251% ( 1) 00:15:33.307 5.902 - 5.926: 98.6326% ( 1) 00:15:33.307 6.021 - 6.044: 98.6401% ( 1) 00:15:33.307 6.068 - 6.116: 98.6475% ( 1) 00:15:33.307 6.305 - 6.353: 98.6550% ( 1) 00:15:33.307 6.353 - 6.400: 98.6625% ( 1) 00:15:33.307 6.495 - 6.542: 98.6700% ( 1) 00:15:33.307 6.542 - 6.590: 98.6849% ( 2) 00:15:33.307 6.732 - 6.779: 98.6924% ( 1) 00:15:33.307 6.779 - 6.827: 98.6998% ( 1) 00:15:33.307 7.111 - 7.159: 98.7073% ( 1) 00:15:33.307 7.159 - 7.206: 98.7148% ( 1) 00:15:33.307 7.396 - 7.443: 98.7223% ( 1) 00:15:33.307 7.443 - 7.490: 98.7297% ( 1) 00:15:33.307 7.822 - 7.870: 98.7372% ( 1) 00:15:33.307 8.723 - 8.770: 98.7447% ( 1) 00:15:33.307 9.529 - 9.576: 98.7521% ( 1) 00:15:33.307 9.861 - 9.908: 98.7596% ( 1) 00:15:33.307 15.360 - 15.455: 98.7671% ( 1) 00:15:33.307 15.455 - 15.550: 98.7746% ( 1) 
00:15:33.307 15.550 - 15.644: 98.7895% ( 2) 00:15:33.307 15.739 - 15.834: 98.8045% ( 2) 00:15:33.307 15.834 - 15.929: 98.8194% ( 2) 00:15:33.307 15.929 - 16.024: 98.8418% ( 3) 00:15:33.307 16.024 - 16.119: 98.8792% ( 5) 00:15:33.307 16.119 - 16.213: 98.9390% ( 8) 00:15:33.307 16.213 - 16.308: 98.9763% ( 5) 00:15:33.307 16.308 - 16.403: 99.0062% ( 4) 00:15:33.307 16.403 - 16.498: 99.0585% ( 7) 00:15:33.307 16.498 - 16.593: 99.1033% ( 6) 00:15:33.307 16.593 - 16.687: 99.1482% ( 6) 00:15:33.307 16.687 - 16.782: 99.1930% ( 6) 00:15:33.307 16.782 - 16.877: 99.2378% ( 6) 00:15:33.307 16.877 - 16.972: 99.2528% ( 2) 00:15:33.307 16.972 - 17.067: 99.2603% ( 1) 00:15:33.307 17.256 - 17.351: 99.2976% ( 5) 00:15:33.307 17.351 - 17.446: 99.3051% ( 1) 00:15:33.307 17.730 - 17.825: 99.3126% ( 1) 00:15:33.307 18.299 - 18.394: 99.3200% ( 1) 00:15:33.307 18.489 - 18.584: 99.3350% ( 2) 00:15:33.307 27.496 - 27.686: 99.3424% ( 1) 00:15:33.307 46.649 - 46.839: 99.3499% ( 1) 00:15:33.307 3203.982 - 3228.255: 99.3574% ( 1) 00:15:33.307 3980.705 - 4004.978: 99.9029% ( 73) 00:15:33.307 4004.978 - 4029.250: 100.0000% ( 13) 00:15:33.307 00:15:33.307 23:55:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:33.307 23:55:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:33.307 23:55:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:33.307 23:55:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:33.307 23:55:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:33.307 [ 00:15:33.307 { 00:15:33.307 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:33.307 "subtype": "Discovery", 00:15:33.307 "listen_addresses": [], 00:15:33.307 
"allow_any_host": true, 00:15:33.307 "hosts": [] 00:15:33.307 }, 00:15:33.307 { 00:15:33.307 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:33.307 "subtype": "NVMe", 00:15:33.307 "listen_addresses": [ 00:15:33.307 { 00:15:33.307 "trtype": "VFIOUSER", 00:15:33.307 "adrfam": "IPv4", 00:15:33.307 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:33.307 "trsvcid": "0" 00:15:33.307 } 00:15:33.307 ], 00:15:33.307 "allow_any_host": true, 00:15:33.307 "hosts": [], 00:15:33.307 "serial_number": "SPDK1", 00:15:33.307 "model_number": "SPDK bdev Controller", 00:15:33.307 "max_namespaces": 32, 00:15:33.307 "min_cntlid": 1, 00:15:33.307 "max_cntlid": 65519, 00:15:33.307 "namespaces": [ 00:15:33.307 { 00:15:33.307 "nsid": 1, 00:15:33.307 "bdev_name": "Malloc1", 00:15:33.307 "name": "Malloc1", 00:15:33.307 "nguid": "CA356D1F43534E59A84493E7CDCAF490", 00:15:33.307 "uuid": "ca356d1f-4353-4e59-a844-93e7cdcaf490" 00:15:33.307 }, 00:15:33.307 { 00:15:33.307 "nsid": 2, 00:15:33.307 "bdev_name": "Malloc3", 00:15:33.307 "name": "Malloc3", 00:15:33.307 "nguid": "9A2A03C1FF70418BB990590170AAE89C", 00:15:33.307 "uuid": "9a2a03c1-ff70-418b-b990-590170aae89c" 00:15:33.307 } 00:15:33.307 ] 00:15:33.307 }, 00:15:33.307 { 00:15:33.307 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:33.307 "subtype": "NVMe", 00:15:33.307 "listen_addresses": [ 00:15:33.307 { 00:15:33.307 "trtype": "VFIOUSER", 00:15:33.307 "adrfam": "IPv4", 00:15:33.307 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:33.308 "trsvcid": "0" 00:15:33.308 } 00:15:33.308 ], 00:15:33.308 "allow_any_host": true, 00:15:33.308 "hosts": [], 00:15:33.308 "serial_number": "SPDK2", 00:15:33.308 "model_number": "SPDK bdev Controller", 00:15:33.308 "max_namespaces": 32, 00:15:33.308 "min_cntlid": 1, 00:15:33.308 "max_cntlid": 65519, 00:15:33.308 "namespaces": [ 00:15:33.308 { 00:15:33.308 "nsid": 1, 00:15:33.308 "bdev_name": "Malloc2", 00:15:33.308 "name": "Malloc2", 00:15:33.308 "nguid": "03BDD86106F546079853CAA4FCA2327F", 00:15:33.308 
"uuid": "03bdd861-06f5-4607-9853-caa4fca2327f" 00:15:33.308 } 00:15:33.308 ] 00:15:33.308 } 00:15:33.308 ] 00:15:33.308 23:55:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:33.308 23:55:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=752891 00:15:33.308 23:55:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:33.308 23:55:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:33.308 23:55:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:33.308 23:55:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:33.308 23:55:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:33.308 23:55:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:33.308 23:55:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:33.308 23:55:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:33.308 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.565 [2024-07-24 23:55:39.048619] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.565 Malloc4 00:15:33.565 23:55:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:33.822 [2024-07-24 23:55:39.420535] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.822 23:55:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:33.822 Asynchronous Event Request test 00:15:33.822 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.822 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.822 Registering asynchronous event callbacks... 00:15:33.822 Starting namespace attribute notice tests for all controllers... 00:15:33.822 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:33.822 aer_cb - Changed Namespace 00:15:33.822 Cleaning up... 
00:15:34.079 [ 00:15:34.079 { 00:15:34.079 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:34.079 "subtype": "Discovery", 00:15:34.079 "listen_addresses": [], 00:15:34.079 "allow_any_host": true, 00:15:34.079 "hosts": [] 00:15:34.079 }, 00:15:34.079 { 00:15:34.079 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:34.079 "subtype": "NVMe", 00:15:34.079 "listen_addresses": [ 00:15:34.079 { 00:15:34.079 "trtype": "VFIOUSER", 00:15:34.079 "adrfam": "IPv4", 00:15:34.079 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:34.079 "trsvcid": "0" 00:15:34.079 } 00:15:34.079 ], 00:15:34.079 "allow_any_host": true, 00:15:34.079 "hosts": [], 00:15:34.079 "serial_number": "SPDK1", 00:15:34.079 "model_number": "SPDK bdev Controller", 00:15:34.079 "max_namespaces": 32, 00:15:34.079 "min_cntlid": 1, 00:15:34.079 "max_cntlid": 65519, 00:15:34.079 "namespaces": [ 00:15:34.079 { 00:15:34.079 "nsid": 1, 00:15:34.079 "bdev_name": "Malloc1", 00:15:34.079 "name": "Malloc1", 00:15:34.079 "nguid": "CA356D1F43534E59A84493E7CDCAF490", 00:15:34.079 "uuid": "ca356d1f-4353-4e59-a844-93e7cdcaf490" 00:15:34.079 }, 00:15:34.079 { 00:15:34.079 "nsid": 2, 00:15:34.079 "bdev_name": "Malloc3", 00:15:34.079 "name": "Malloc3", 00:15:34.079 "nguid": "9A2A03C1FF70418BB990590170AAE89C", 00:15:34.079 "uuid": "9a2a03c1-ff70-418b-b990-590170aae89c" 00:15:34.079 } 00:15:34.079 ] 00:15:34.079 }, 00:15:34.079 { 00:15:34.079 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:34.079 "subtype": "NVMe", 00:15:34.079 "listen_addresses": [ 00:15:34.079 { 00:15:34.079 "trtype": "VFIOUSER", 00:15:34.079 "adrfam": "IPv4", 00:15:34.079 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:34.079 "trsvcid": "0" 00:15:34.079 } 00:15:34.079 ], 00:15:34.079 "allow_any_host": true, 00:15:34.079 "hosts": [], 00:15:34.079 "serial_number": "SPDK2", 00:15:34.079 "model_number": "SPDK bdev Controller", 00:15:34.079 "max_namespaces": 32, 00:15:34.079 "min_cntlid": 1, 00:15:34.079 "max_cntlid": 65519, 00:15:34.079 "namespaces": [ 
00:15:34.079 { 00:15:34.079 "nsid": 1, 00:15:34.079 "bdev_name": "Malloc2", 00:15:34.079 "name": "Malloc2", 00:15:34.079 "nguid": "03BDD86106F546079853CAA4FCA2327F", 00:15:34.079 "uuid": "03bdd861-06f5-4607-9853-caa4fca2327f" 00:15:34.079 }, 00:15:34.079 { 00:15:34.079 "nsid": 2, 00:15:34.079 "bdev_name": "Malloc4", 00:15:34.079 "name": "Malloc4", 00:15:34.079 "nguid": "564F55441F354F9289220AB8C439AE9E", 00:15:34.079 "uuid": "564f5544-1f35-4f92-8922-0ab8c439ae9e" 00:15:34.079 } 00:15:34.079 ] 00:15:34.079 } 00:15:34.079 ] 00:15:34.079 23:55:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 752891 00:15:34.080 23:55:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:34.080 23:55:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 747303 00:15:34.080 23:55:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 747303 ']' 00:15:34.080 23:55:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 747303 00:15:34.080 23:55:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:34.080 23:55:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:34.080 23:55:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 747303 00:15:34.080 23:55:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:34.080 23:55:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:34.080 23:55:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 747303' 00:15:34.080 killing process with pid 747303 00:15:34.080 23:55:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 747303 00:15:34.080 23:55:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 747303 00:15:34.645 23:55:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm 
-rf /var/run/vfio-user 00:15:34.645 23:55:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:34.645 23:55:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:34.645 23:55:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:34.645 23:55:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:34.645 23:55:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=753033 00:15:34.645 23:55:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:34.645 23:55:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 753033' 00:15:34.645 Process pid: 753033 00:15:34.645 23:55:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:34.645 23:55:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 753033 00:15:34.645 23:55:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 753033 ']' 00:15:34.645 23:55:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.645 23:55:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:34.645 23:55:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:34.646 23:55:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:34.646 23:55:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:34.646 [2024-07-24 23:55:40.121513] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:34.646 [2024-07-24 23:55:40.122556] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:34.646 [2024-07-24 23:55:40.122632] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.646 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.646 [2024-07-24 23:55:40.181062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:34.646 [2024-07-24 23:55:40.268411] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.646 [2024-07-24 23:55:40.268464] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.646 [2024-07-24 23:55:40.268487] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.646 [2024-07-24 23:55:40.268498] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.646 [2024-07-24 23:55:40.268507] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:34.646 [2024-07-24 23:55:40.268586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.646 [2024-07-24 23:55:40.268651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.646 [2024-07-24 23:55:40.268717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.646 [2024-07-24 23:55:40.268720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.903 [2024-07-24 23:55:40.362579] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:34.903 [2024-07-24 23:55:40.362809] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:34.903 [2024-07-24 23:55:40.363067] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:34.904 [2024-07-24 23:55:40.363667] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:34.904 [2024-07-24 23:55:40.363933] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:15:34.904 23:55:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:34.904 23:55:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:34.904 23:55:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:35.835 23:55:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:36.092 23:55:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:36.092 23:55:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:36.092 23:55:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:36.092 23:55:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:36.092 23:55:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:36.351 Malloc1 00:15:36.351 23:55:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:36.610 23:55:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:36.867 23:55:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:37.124 23:55:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:37.124 23:55:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:15:37.124 23:55:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:37.381 Malloc2 00:15:37.381 23:55:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:37.638 23:55:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:37.894 23:55:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:38.151 23:55:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:38.151 23:55:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 753033 00:15:38.151 23:55:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 753033 ']' 00:15:38.151 23:55:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 753033 00:15:38.151 23:55:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:38.151 23:55:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:38.151 23:55:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 753033 00:15:38.151 23:55:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:38.151 23:55:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:38.151 23:55:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 753033' 00:15:38.151 killing 
process with pid 753033 00:15:38.151 23:55:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 753033 00:15:38.151 23:55:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 753033 00:15:38.409 23:55:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:38.409 23:55:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:38.409 00:15:38.409 real 0m52.292s 00:15:38.409 user 3m26.619s 00:15:38.409 sys 0m4.280s 00:15:38.409 23:55:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:38.409 23:55:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:38.409 ************************************ 00:15:38.409 END TEST nvmf_vfio_user 00:15:38.409 ************************************ 00:15:38.409 23:55:43 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:38.409 23:55:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:38.409 23:55:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:38.409 23:55:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:38.409 ************************************ 00:15:38.409 START TEST nvmf_vfio_user_nvme_compliance 00:15:38.409 ************************************ 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:38.409 * Looking for test storage... 
00:15:38.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.409 23:55:44 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:38.409 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:38.410 23:55:44 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=753579 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 753579' 00:15:38.410 Process pid: 753579 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 753579 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 753579 ']' 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:38.410 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:38.668 [2024-07-24 23:55:44.126806] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:38.668 [2024-07-24 23:55:44.126883] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.668 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.668 [2024-07-24 23:55:44.184823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:38.668 [2024-07-24 23:55:44.269678] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.668 [2024-07-24 23:55:44.269731] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:38.668 [2024-07-24 23:55:44.269755] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.668 [2024-07-24 23:55:44.269766] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.668 [2024-07-24 23:55:44.269777] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.668 [2024-07-24 23:55:44.269837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.668 [2024-07-24 23:55:44.269894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.668 [2024-07-24 23:55:44.269896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.668 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:38.668 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:15:38.668 23:55:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:40.038 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:40.038 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:40.038 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:40.038 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.038 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.039 malloc0 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.039 23:55:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:40.039 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.039 00:15:40.039 00:15:40.039 CUnit - A unit testing framework for C - Version 2.1-3 00:15:40.039 http://cunit.sourceforge.net/ 00:15:40.039 00:15:40.039 00:15:40.039 Suite: nvme_compliance 00:15:40.039 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 23:55:45.611802] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.039 [2024-07-24 23:55:45.613308] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:40.039 [2024-07-24 23:55:45.613334] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:40.039 [2024-07-24 23:55:45.613348] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:40.039 [2024-07-24 23:55:45.614822] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.039 passed 00:15:40.039 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 23:55:45.700424] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.039 [2024-07-24 23:55:45.705446] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.039 passed 00:15:40.296 Test: admin_identify_ns ...[2024-07-24 23:55:45.793688] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.296 [2024-07-24 23:55:45.853117] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:40.296 [2024-07-24 23:55:45.861134] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:40.296 [2024-07-24 23:55:45.882249] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:15:40.296 passed 00:15:40.296 Test: admin_get_features_mandatory_features ...[2024-07-24 23:55:45.965872] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.296 [2024-07-24 23:55:45.968892] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.296 passed 00:15:40.553 Test: admin_get_features_optional_features ...[2024-07-24 23:55:46.054478] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.553 [2024-07-24 23:55:46.059513] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.553 passed 00:15:40.553 Test: admin_set_features_number_of_queues ...[2024-07-24 23:55:46.143661] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.553 [2024-07-24 23:55:46.248227] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.811 passed 00:15:40.811 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 23:55:46.332170] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.811 [2024-07-24 23:55:46.335192] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.811 passed 00:15:40.811 Test: admin_get_log_page_with_lpo ...[2024-07-24 23:55:46.420384] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.811 [2024-07-24 23:55:46.492121] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:40.811 [2024-07-24 23:55:46.505192] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.068 passed 00:15:41.068 Test: fabric_property_get ...[2024-07-24 23:55:46.589273] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.068 [2024-07-24 23:55:46.590566] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:15:41.068 [2024-07-24 23:55:46.592302] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.068 passed 00:15:41.068 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 23:55:46.678887] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.068 [2024-07-24 23:55:46.680215] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:41.068 [2024-07-24 23:55:46.681908] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.068 passed 00:15:41.068 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 23:55:46.766619] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.324 [2024-07-24 23:55:46.850112] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:41.324 [2024-07-24 23:55:46.866112] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:41.324 [2024-07-24 23:55:46.871215] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.324 passed 00:15:41.324 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 23:55:46.954861] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.324 [2024-07-24 23:55:46.956188] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:41.324 [2024-07-24 23:55:46.957883] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.324 passed 00:15:41.582 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 23:55:47.043664] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.582 [2024-07-24 23:55:47.119124] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:41.582 [2024-07-24 23:55:47.143131] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:41.582 [2024-07-24 23:55:47.148210] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.582 passed 00:15:41.582 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 23:55:47.232068] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.582 [2024-07-24 23:55:47.233399] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:41.582 [2024-07-24 23:55:47.233453] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:41.582 [2024-07-24 23:55:47.235108] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.582 passed 00:15:41.878 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 23:55:47.323073] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.878 [2024-07-24 23:55:47.417134] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:41.878 [2024-07-24 23:55:47.425126] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:41.878 [2024-07-24 23:55:47.433130] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:41.878 [2024-07-24 23:55:47.441130] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:41.878 [2024-07-24 23:55:47.470216] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.878 passed 00:15:41.878 Test: admin_create_io_sq_verify_pc ...[2024-07-24 23:55:47.555317] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.878 [2024-07-24 23:55:47.572129] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:42.136 [2024-07-24 23:55:47.589627] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.136 passed 00:15:42.136 Test: admin_create_io_qp_max_qps ...[2024-07-24 23:55:47.672190] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.067 [2024-07-24 23:55:48.775119] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:43.631 [2024-07-24 23:55:49.159838] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.631 passed 00:15:43.631 Test: admin_create_io_sq_shared_cq ...[2024-07-24 23:55:49.242186] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.889 [2024-07-24 23:55:49.375126] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:43.889 [2024-07-24 23:55:49.412195] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.889 passed 00:15:43.889 00:15:43.889 Run Summary: Type Total Ran Passed Failed Inactive 00:15:43.889 suites 1 1 n/a 0 0 00:15:43.889 tests 18 18 18 0 0 00:15:43.889 asserts 360 360 360 0 n/a 00:15:43.889 00:15:43.889 Elapsed time = 1.579 seconds 00:15:43.889 23:55:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 753579 00:15:43.889 23:55:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 753579 ']' 00:15:43.889 23:55:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 753579 00:15:43.889 23:55:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:15:43.889 23:55:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:43.889 23:55:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 753579 00:15:43.889 23:55:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:43.889 23:55:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:43.889 23:55:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 753579' 00:15:43.889 killing process with pid 753579 00:15:43.889 23:55:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 753579 00:15:43.889 23:55:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 753579 00:15:44.147 23:55:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:44.147 23:55:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:44.147 00:15:44.147 real 0m5.725s 00:15:44.147 user 0m16.123s 00:15:44.147 sys 0m0.540s 00:15:44.147 23:55:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:44.147 23:55:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:44.147 ************************************ 00:15:44.147 END TEST nvmf_vfio_user_nvme_compliance 00:15:44.147 ************************************ 00:15:44.147 23:55:49 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:44.147 23:55:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:44.147 23:55:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:44.147 23:55:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:44.147 ************************************ 00:15:44.147 START TEST nvmf_vfio_user_fuzz 00:15:44.147 ************************************ 00:15:44.147 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:44.147 * Looking for test storage... 00:15:44.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:44.147 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:44.147 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:44.147 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.147 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.147 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.147 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.147 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.147 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.147 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=754353 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 754353' 00:15:44.148 Process pid: 754353 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 754353 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 754353 ']' 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:44.148 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.406 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:44.406 23:55:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:44.663 23:55:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:44.663 23:55:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:15:44.663 23:55:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:45.594 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:45.594 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.594 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:45.594 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.594 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:45.594 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:45.594 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.594 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:45.594 malloc0 00:15:45.594 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.594 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:45.594 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.594 
23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:45.594 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.594 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:45.594 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.595 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:45.595 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.595 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:45.595 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.595 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:45.595 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.595 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:45.595 23:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:17.651 Fuzzing completed. 
Shutting down the fuzz application 00:16:17.651 00:16:17.651 Dumping successful admin opcodes: 00:16:17.651 8, 9, 10, 24, 00:16:17.651 Dumping successful io opcodes: 00:16:17.651 0, 00:16:17.652 NS: 0x200003a1ef00 I/O qp, Total commands completed: 590901, total successful commands: 2280, random_seed: 500411776 00:16:17.652 NS: 0x200003a1ef00 admin qp, Total commands completed: 76986, total successful commands: 596, random_seed: 3780732608 00:16:17.652 23:56:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:17.652 23:56:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.652 23:56:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:17.652 23:56:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.652 23:56:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 754353 00:16:17.652 23:56:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 754353 ']' 00:16:17.652 23:56:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 754353 00:16:17.652 23:56:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:16:17.652 23:56:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:17.652 23:56:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 754353 00:16:17.652 23:56:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:17.652 23:56:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:17.652 23:56:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 754353' 00:16:17.652 killing process with pid 754353 00:16:17.652 23:56:21 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@965 -- # kill 754353 00:16:17.652 23:56:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 754353 00:16:17.652 23:56:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:17.652 23:56:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:17.652 00:16:17.652 real 0m32.230s 00:16:17.652 user 0m31.808s 00:16:17.652 sys 0m28.484s 00:16:17.652 23:56:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:17.652 23:56:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:17.652 ************************************ 00:16:17.652 END TEST nvmf_vfio_user_fuzz 00:16:17.652 ************************************ 00:16:17.652 23:56:22 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:17.652 23:56:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:17.652 23:56:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:17.652 23:56:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:17.652 ************************************ 00:16:17.652 START TEST nvmf_host_management 00:16:17.652 ************************************ 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:17.652 * Looking for test storage... 
00:16:17.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.652 
23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:17.652 23:56:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:18.586 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.586 
23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:18.586 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:18.586 Found net devices under 0000:09:00.0: cvl_0_0 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.586 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:18.587 Found net devices under 0000:09:00.1: cvl_0_1 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:18.587 23:56:24 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:18.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:18.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:16:18.587 00:16:18.587 --- 10.0.0.2 ping statistics --- 00:16:18.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.587 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:18.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:18.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:16:18.587 00:16:18.587 --- 10.0.0.1 ping statistics --- 00:16:18.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.587 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:18.587 23:56:24 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=759708 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 759708 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 759708 ']' 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:18.587 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:18.587 [2024-07-24 23:56:24.259850] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:16:18.587 [2024-07-24 23:56:24.259929] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.587 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.845 [2024-07-24 23:56:24.330873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:18.845 [2024-07-24 23:56:24.430768] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.845 [2024-07-24 23:56:24.430824] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.845 [2024-07-24 23:56:24.430841] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.845 [2024-07-24 23:56:24.430853] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.845 [2024-07-24 23:56:24.430864] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:18.845 [2024-07-24 23:56:24.430946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.845 [2024-07-24 23:56:24.430985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:18.845 [2024-07-24 23:56:24.431059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:18.845 [2024-07-24 23:56:24.431062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.845 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:18.845 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:18.845 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:18.845 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:18.845 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:19.103 [2024-07-24 23:56:24.585839] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:19.103 23:56:24 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:19.103 Malloc0 00:16:19.103 [2024-07-24 23:56:24.647060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=759864 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 759864 /var/tmp/bdevperf.sock 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 759864 ']' 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:19.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:19.103 { 00:16:19.103 "params": { 00:16:19.103 "name": "Nvme$subsystem", 00:16:19.103 "trtype": "$TEST_TRANSPORT", 00:16:19.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:19.103 "adrfam": "ipv4", 00:16:19.103 "trsvcid": "$NVMF_PORT", 00:16:19.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:19.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:19.103 "hdgst": ${hdgst:-false}, 00:16:19.103 "ddgst": ${ddgst:-false} 00:16:19.103 }, 00:16:19.103 "method": "bdev_nvme_attach_controller" 00:16:19.103 } 00:16:19.103 EOF 00:16:19.103 )") 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:19.103 23:56:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:19.103 "params": { 00:16:19.103 "name": "Nvme0", 00:16:19.103 "trtype": "tcp", 00:16:19.103 "traddr": "10.0.0.2", 00:16:19.103 "adrfam": "ipv4", 00:16:19.103 "trsvcid": "4420", 00:16:19.103 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:19.103 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:19.103 "hdgst": false, 00:16:19.103 "ddgst": false 00:16:19.103 }, 00:16:19.103 "method": "bdev_nvme_attach_controller" 00:16:19.103 }' 00:16:19.103 [2024-07-24 23:56:24.725526] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:19.103 [2024-07-24 23:56:24.725615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759864 ] 00:16:19.103 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.103 [2024-07-24 23:56:24.785884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.361 [2024-07-24 23:56:24.874157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.361 Running I/O for 10 seconds... 
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=11
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 11 -ge 100 ']'
00:16:19.621 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25
00:16:19.878 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- ))
00:16:19.878 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:16:19.878 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:16:19.878 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:16:19.878 23:56:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.878 23:56:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:19.878 23:56:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.878 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451
00:16:19.878 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']'
00:16:19.878 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:16:19.878 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:16:19.878 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:16:19.878 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:19.878 23:56:25
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.878 23:56:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:19.878 [2024-07-24 23:56:25.423974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.878 [2024-07-24 23:56:25.424023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.878 [2024-07-24 23:56:25.424048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.878 [2024-07-24 23:56:25.424062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.878 [2024-07-24 23:56:25.424076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.878 [2024-07-24 23:56:25.424089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.878 [2024-07-24 23:56:25.424116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.878 [2024-07-24 23:56:25.424130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.878 [2024-07-24 23:56:25.424143] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1468f00 is same with the state(5) to be set 00:16:19.878 [2024-07-24 23:56:25.424717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.878 [2024-07-24 23:56:25.424743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.878 [2024-07-24 23:56:25.424767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.878 [2024-07-24 23:56:25.424784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.878 [2024-07-24 23:56:25.424802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.878 [2024-07-24 23:56:25.424818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.878 [2024-07-24 23:56:25.424835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.878 [2024-07-24 23:56:25.424860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.878 [2024-07-24 23:56:25.424876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.878 [2024-07-24 23:56:25.424892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.878 [2024-07-24 23:56:25.424917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.878 [2024-07-24 23:56:25.424933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.424949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.424964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.424981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.424996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:19.879 [2024-07-24 23:56:25.425167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 
23:56:25.425350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 23:56:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.879 [2024-07-24 23:56:25.425876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425892] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.425971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.425988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.426003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.426020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:19.879 [2024-07-24 23:56:25.426036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.426052] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.426068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.426084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.426099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.879 [2024-07-24 23:56:25.426123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.879 [2024-07-24 23:56:25.426154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 23:56:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.880 [2024-07-24 23:56:25.426187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 
23:56:25.426256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 23:56:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:19.880 [2024-07-24 23:56:25.426289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 
23:56:25.426640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.880 [2024-07-24 23:56:25.426904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.880 [2024-07-24 23:56:25.426997] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1463330 was disconnected and freed. reset controller. 
00:16:19.880 [2024-07-24 23:56:25.428132] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
task offset: 65536 on job bdev=Nvme0n1 fails
00:16:19.880
00:16:19.880                                                 Latency(us)
00:16:19.880 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:19.880 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:19.880 Job: Nvme0n1 ended in about 0.38 seconds with error
00:16:19.880 Verification LBA range: start 0x0 length 0x400
00:16:19.880 Nvme0n1             :       0.38    1360.14      85.01     170.02       0.00   40607.64    2718.53   39030.33
00:16:19.880 ===================================================================================================================
00:16:19.880 Total               :               1360.14      85.01     170.02       0.00   40607.64    2718.53   39030.33
00:16:19.880 [2024-07-24 23:56:25.429990] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:19.880 [2024-07-24 23:56:25.430020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1468f00 (9): Bad file descriptor
00:16:19.880 23:56:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.880 23:56:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-07-24 23:56:25.534275] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:20.812 23:56:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 759864
00:16:20.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (759864) - No such process
00:16:20.812 23:56:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:16:20.812 23:56:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:16:20.812 23:56:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:16:20.812 23:56:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:16:20.812 23:56:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:16:20.812 23:56:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:16:20.812 23:56:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:16:20.812 23:56:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:16:20.812 {
00:16:20.812   "params": {
00:16:20.812     "name": "Nvme$subsystem",
00:16:20.812     "trtype": "$TEST_TRANSPORT",
00:16:20.812     "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:20.812     "adrfam": "ipv4",
00:16:20.812     "trsvcid": "$NVMF_PORT",
00:16:20.812     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:20.812     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:20.812     "hdgst": ${hdgst:-false},
00:16:20.812     "ddgst": ${ddgst:-false}
00:16:20.812   },
00:16:20.812   "method": "bdev_nvme_attach_controller"
00:16:20.812 }
00:16:20.812 EOF
00:16:20.812 )")
00:16:20.812 23:56:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:16:20.812 23:56:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:16:20.812 23:56:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:16:20.812 23:56:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:16:20.812   "params": {
00:16:20.812     "name": "Nvme0",
00:16:20.812     "trtype": "tcp",
00:16:20.812     "traddr": "10.0.0.2",
00:16:20.812     "adrfam": "ipv4",
00:16:20.812     "trsvcid": "4420",
00:16:20.812     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:16:20.812     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:20.812     "hdgst": false,
00:16:20.812     "ddgst": false
00:16:20.812   },
00:16:20.812   "method": "bdev_nvme_attach_controller"
00:16:20.812 }'
00:16:20.812 [2024-07-24 23:56:26.477918] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... [2024-07-24 23:56:26.477999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760030 ]
00:16:20.812 EAL: No free 2048 kB hugepages reported on node 1
00:16:21.069 [2024-07-24 23:56:26.538026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:21.069 [2024-07-24 23:56:26.623767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:21.327 Running I/O for 1 seconds...
00:16:22.697 00:16:22.697 Latency(us) 00:16:22.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.697 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:22.697 Verification LBA range: start 0x0 length 0x400 00:16:22.697 Nvme0n1 : 1.04 1481.08 92.57 0.00 0.00 42558.06 9854.67 36505.98 00:16:22.697 =================================================================================================================== 00:16:22.697 Total : 1481.08 92.57 0.00 0.00 42558.06 9854.67 36505.98 00:16:22.697 23:56:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:22.697 23:56:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:22.697 23:56:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:22.697 23:56:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:22.697 23:56:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:22.697 23:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:22.697 23:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:22.697 23:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:22.697 23:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:22.697 23:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:22.697 23:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:22.697 rmmod nvme_tcp 00:16:22.697 rmmod nvme_fabrics 00:16:22.697 rmmod nvme_keyring 00:16:22.697 23:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:22.697 
23:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:22.697 23:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:22.698 23:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 759708 ']' 00:16:22.698 23:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 759708 00:16:22.698 23:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 759708 ']' 00:16:22.698 23:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 759708 00:16:22.698 23:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:22.698 23:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:22.698 23:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 759708 00:16:22.698 23:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:22.698 23:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:22.698 23:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 759708' 00:16:22.698 killing process with pid 759708 00:16:22.698 23:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 759708 00:16:22.698 23:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 759708 00:16:22.956 [2024-07-24 23:56:28.501592] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:22.956 23:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:22.956 23:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:22.956 23:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:22.956 23:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.956 23:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:22.956 23:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.956 23:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.956 23:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.858 23:56:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:25.116 23:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:25.116 00:16:25.116 real 0m8.510s 00:16:25.116 user 0m19.084s 00:16:25.116 sys 0m2.634s 00:16:25.116 23:56:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:25.116 23:56:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:25.116 ************************************ 00:16:25.116 END TEST nvmf_host_management 00:16:25.116 ************************************ 00:16:25.116 23:56:30 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:25.116 23:56:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:25.116 23:56:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:25.116 23:56:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:25.116 ************************************ 00:16:25.116 START TEST nvmf_lvol 00:16:25.116 ************************************ 00:16:25.116 23:56:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:25.116 * Looking for test storage... 
00:16:25.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:25.116 23:56:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.116 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:25.116 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.116 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.116 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.116 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.116 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.116 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.116 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:25.117 23:56:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:27.016 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:27.017 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:27.017 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:27.017 Found net devices under 0000:09:00.0: cvl_0_0 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:27.017 Found net devices under 0000:09:00.1: cvl_0_1 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:27.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:16:27.017 00:16:27.017 --- 10.0.0.2 ping statistics --- 00:16:27.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.017 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:27.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:16:27.017 00:16:27.017 --- 10.0.0.1 ping statistics --- 00:16:27.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.017 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=762221 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 762221 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 762221 ']' 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:27.017 23:56:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:27.017 [2024-07-24 23:56:32.730917] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:27.017 [2024-07-24 23:56:32.731011] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.304 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.304 [2024-07-24 23:56:32.799259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:27.304 [2024-07-24 23:56:32.890801] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.304 [2024-07-24 23:56:32.890864] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.304 [2024-07-24 23:56:32.890888] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.304 [2024-07-24 23:56:32.890902] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.304 [2024-07-24 23:56:32.890914] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:27.304 [2024-07-24 23:56:32.891009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.304 [2024-07-24 23:56:32.891088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.304 [2024-07-24 23:56:32.891090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.304 23:56:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:27.304 23:56:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:27.304 23:56:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:27.304 23:56:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:27.304 23:56:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:27.561 23:56:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.561 23:56:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:27.561 [2024-07-24 23:56:33.246235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.561 23:56:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:28.126 23:56:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:28.126 23:56:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:28.126 23:56:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:28.126 23:56:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:28.383 23:56:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:28.640 23:56:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d77b76ac-c993-402d-b97a-521ab355e39d 00:16:28.640 23:56:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d77b76ac-c993-402d-b97a-521ab355e39d lvol 20 00:16:28.897 23:56:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e25b5173-2eaf-4260-923b-178e9082b344 00:16:28.897 23:56:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:29.155 23:56:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e25b5173-2eaf-4260-923b-178e9082b344 00:16:29.412 23:56:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:29.669 [2024-07-24 23:56:35.314993] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:29.669 23:56:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:29.927 23:56:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=762533 00:16:29.927 23:56:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:29.927 23:56:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:29.927 EAL: No free 2048 kB hugepages reported on node 1 
00:16:30.859 23:56:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e25b5173-2eaf-4260-923b-178e9082b344 MY_SNAPSHOT 00:16:31.425 23:56:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=13926e76-c022-4406-bfd0-0c18a5e639b4 00:16:31.425 23:56:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e25b5173-2eaf-4260-923b-178e9082b344 30 00:16:31.682 23:56:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 13926e76-c022-4406-bfd0-0c18a5e639b4 MY_CLONE 00:16:31.939 23:56:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f76487e5-97e5-4851-afe8-2c35417c8d73 00:16:31.939 23:56:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f76487e5-97e5-4851-afe8-2c35417c8d73 00:16:32.505 23:56:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 762533 00:16:40.608 Initializing NVMe Controllers 00:16:40.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:40.608 Controller IO queue size 128, less than required. 00:16:40.608 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:40.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:40.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:40.608 Initialization complete. Launching workers. 
00:16:40.608 ========================================================
00:16:40.608 Latency(us)
00:16:40.608 Device Information : IOPS MiB/s Average min max
00:16:40.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9283.00 36.26 13789.92 2177.83 97887.92
00:16:40.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10849.60 42.38 11801.80 2010.20 76178.89
00:16:40.609 ========================================================
00:16:40.609 Total : 20132.60 78.64 12718.51 2010.20 97887.92
00:16:40.609
00:16:40.609 23:56:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:16:40.609 23:56:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e25b5173-2eaf-4260-923b-178e9082b344
00:16:40.866 23:56:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d77b76ac-c993-402d-b97a-521ab355e39d
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:41.123 rmmod nvme_tcp
00:16:41.123 rmmod nvme_fabrics
00:16:41.123 rmmod nvme_keyring
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 762221 ']'
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 762221
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 762221 ']'
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 762221
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 762221
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 762221'
killing process with pid 762221
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 762221
00:16:41.123 23:56:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 762221
00:16:41.688 23:56:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:41.689 23:56:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:16:41.689 23:56:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:16:41.689 23:56:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:41.689 23:56:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:41.689 23:56:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:41.689 23:56:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:41.689 23:56:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:43.596
00:16:43.596 real 0m18.548s
00:16:43.596 user 1m2.049s
00:16:43.596 sys 0m6.187s
00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable
00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:16:43.596 ************************************
00:16:43.596 END TEST nvmf_lvol
00:16:43.596 ************************************
00:16:43.596 23:56:49 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:16:43.596 23:56:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:16:43.596 23:56:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:16:43.596 23:56:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:16:43.596 ************************************
00:16:43.596 START TEST nvmf_lvs_grow
00:16:43.596 ************************************
00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:16:43.596 * Looking for test storage...
00:16:43.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.596 23:56:49 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.596 23:56:49 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:43.597 23:56:49 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:43.597 23:56:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:46.129 23:56:51 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.129 23:56:51 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:46.129 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:46.130 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:46.130 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:46.130 23:56:51 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:46.130 Found net devices under 0000:09:00.0: cvl_0_0 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:46.130 Found net devices under 0000:09:00.1: cvl_0_1 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:46.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:46.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms
00:16:46.130
00:16:46.130 --- 10.0.0.2 ping statistics ---
00:16:46.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:46.130 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:46.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:46.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms
00:16:46.130
00:16:46.130 --- 10.0.0.1 ping statistics ---
00:16:46.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:46.130 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=765787
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 765787
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 765787 ']'
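The namespace wiring that nvmf_tcp_init performs above (target side of the NIC pair moved into a netns, initiator side left in the root namespace, then a cross-ping) can be sketched as a dry-run script. The `IP` variable defaults to an `echo` stand-in, so the commands are only printed; running them for real requires root and the cvl_0_0/cvl_0_1 interfaces this rig happens to have.

```shell
# Dry-run sketch of the test-network setup traced above.
setup_test_net() {
    IP="${IP:-echo ip}"   # echo stand-in; set IP=ip (as root) to apply for real
    NS="cvl_0_0_ns_spdk"

    $IP netns add "$NS"                                       # target namespace
    $IP link set cvl_0_0 netns "$NS"                          # move target-side port
    $IP addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
    $IP netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    $IP link set cvl_0_1 up
    $IP netns exec "$NS" ip link set cvl_0_0 up
    $IP netns exec "$NS" ip link set lo up
}

setup_test_net
```

The log additionally opens TCP port 4420 with an iptables ACCEPT rule before the pings verify connectivity in both directions.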
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable
00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:16:46.130 [2024-07-24 23:56:51.452713] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:16:46.130 [2024-07-24 23:56:51.452788] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:46.130 EAL: No free 2048 kB hugepages reported on node 1
00:16:46.130 [2024-07-24 23:56:51.516941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:46.130 [2024-07-24 23:56:51.605656] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:46.130 [2024-07-24 23:56:51.605705] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:46.130 [2024-07-24 23:56:51.605718] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:46.130 [2024-07-24 23:56:51.605729] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:46.130 [2024-07-24 23:56:51.605739] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:46.130 [2024-07-24 23:56:51.605764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.130 23:56:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:46.387 [2024-07-24 23:56:52.016671] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.387 23:56:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:46.387 23:56:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:46.387 23:56:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:46.387 23:56:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:46.387 ************************************ 00:16:46.387 START TEST lvs_grow_clean 00:16:46.387 ************************************ 00:16:46.387 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:16:46.387 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:46.387 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:46.387 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:46.387 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:46.387 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:46.387 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:46.387 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:46.387 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:46.387 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:46.953 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:46.953 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:46.953 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b4caf248-9e05-4818-9cf3-77d6a9792d1b 00:16:46.953 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4caf248-9e05-4818-9cf3-77d6a9792d1b 00:16:46.953 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:47.211 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:47.211 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:47.211 23:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b4caf248-9e05-4818-9cf3-77d6a9792d1b lvol 150 00:16:47.782 23:56:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=dc670a4d-8fe9-480c-9872-ec67db3c23a8 00:16:47.782 23:56:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:47.782 23:56:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:47.782 [2024-07-24 23:56:53.420444] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:47.782 [2024-07-24 23:56:53.420523] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:47.782 true 00:16:47.782 23:56:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4caf248-9e05-4818-9cf3-77d6a9792d1b 00:16:47.782 23:56:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:48.041 23:56:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:48.041 23:56:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:16:48.299 23:56:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dc670a4d-8fe9-480c-9872-ec67db3c23a8 00:16:48.555 23:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:48.812 [2024-07-24 23:56:54.399435] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.812 23:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:49.070 23:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=766218 00:16:49.070 23:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:49.070 23:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:49.070 23:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 766218 /var/tmp/bdevperf.sock 00:16:49.071 23:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 766218 ']' 00:16:49.071 23:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:49.071 23:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:49.071 23:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:49.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:49.071 23:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:49.071 23:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:49.071 [2024-07-24 23:56:54.692354] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:49.071 [2024-07-24 23:56:54.692459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid766218 ] 00:16:49.071 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.071 [2024-07-24 23:56:54.753774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.328 [2024-07-24 23:56:54.846616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.328 23:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:49.328 23:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:16:49.329 23:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:49.894 Nvme0n1 00:16:49.894 23:56:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:50.151 [ 00:16:50.151 { 00:16:50.151 "name": "Nvme0n1", 00:16:50.151 "aliases": [ 00:16:50.151 "dc670a4d-8fe9-480c-9872-ec67db3c23a8" 00:16:50.152 ], 00:16:50.152 
"product_name": "NVMe disk", 00:16:50.152 "block_size": 4096, 00:16:50.152 "num_blocks": 38912, 00:16:50.152 "uuid": "dc670a4d-8fe9-480c-9872-ec67db3c23a8", 00:16:50.152 "assigned_rate_limits": { 00:16:50.152 "rw_ios_per_sec": 0, 00:16:50.152 "rw_mbytes_per_sec": 0, 00:16:50.152 "r_mbytes_per_sec": 0, 00:16:50.152 "w_mbytes_per_sec": 0 00:16:50.152 }, 00:16:50.152 "claimed": false, 00:16:50.152 "zoned": false, 00:16:50.152 "supported_io_types": { 00:16:50.152 "read": true, 00:16:50.152 "write": true, 00:16:50.152 "unmap": true, 00:16:50.152 "write_zeroes": true, 00:16:50.152 "flush": true, 00:16:50.152 "reset": true, 00:16:50.152 "compare": true, 00:16:50.152 "compare_and_write": true, 00:16:50.152 "abort": true, 00:16:50.152 "nvme_admin": true, 00:16:50.152 "nvme_io": true 00:16:50.152 }, 00:16:50.152 "memory_domains": [ 00:16:50.152 { 00:16:50.152 "dma_device_id": "system", 00:16:50.152 "dma_device_type": 1 00:16:50.152 } 00:16:50.152 ], 00:16:50.152 "driver_specific": { 00:16:50.152 "nvme": [ 00:16:50.152 { 00:16:50.152 "trid": { 00:16:50.152 "trtype": "TCP", 00:16:50.152 "adrfam": "IPv4", 00:16:50.152 "traddr": "10.0.0.2", 00:16:50.152 "trsvcid": "4420", 00:16:50.152 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:50.152 }, 00:16:50.152 "ctrlr_data": { 00:16:50.152 "cntlid": 1, 00:16:50.152 "vendor_id": "0x8086", 00:16:50.152 "model_number": "SPDK bdev Controller", 00:16:50.152 "serial_number": "SPDK0", 00:16:50.152 "firmware_revision": "24.05.1", 00:16:50.152 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:50.152 "oacs": { 00:16:50.152 "security": 0, 00:16:50.152 "format": 0, 00:16:50.152 "firmware": 0, 00:16:50.152 "ns_manage": 0 00:16:50.152 }, 00:16:50.152 "multi_ctrlr": true, 00:16:50.152 "ana_reporting": false 00:16:50.152 }, 00:16:50.152 "vs": { 00:16:50.152 "nvme_version": "1.3" 00:16:50.152 }, 00:16:50.152 "ns_data": { 00:16:50.152 "id": 1, 00:16:50.152 "can_share": true 00:16:50.152 } 00:16:50.152 } 00:16:50.152 ], 00:16:50.152 "mp_policy": 
"active_passive" 00:16:50.152 } 00:16:50.152 } 00:16:50.152 ] 00:16:50.152 23:56:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=766353 00:16:50.152 23:56:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:50.152 23:56:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:50.152 Running I/O for 10 seconds... 00:16:51.120 Latency(us) 00:16:51.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:51.120 Nvme0n1 : 1.00 14242.00 55.63 0.00 0.00 0.00 0.00 0.00 00:16:51.120 =================================================================================================================== 00:16:51.120 Total : 14242.00 55.63 0.00 0.00 0.00 0.00 0.00 00:16:51.120 00:16:52.054 23:56:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b4caf248-9e05-4818-9cf3-77d6a9792d1b 00:16:52.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.054 Nvme0n1 : 2.00 14624.50 57.13 0.00 0.00 0.00 0.00 0.00 00:16:52.054 =================================================================================================================== 00:16:52.054 Total : 14624.50 57.13 0.00 0.00 0.00 0.00 0.00 00:16:52.054 00:16:52.312 true 00:16:52.312 23:56:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4caf248-9e05-4818-9cf3-77d6a9792d1b 00:16:52.312 23:56:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:52.570 23:56:58 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:52.570 23:56:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:52.570 23:56:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 766353 00:16:53.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:53.136 Nvme0n1 : 3.00 14630.67 57.15 0.00 0.00 0.00 0.00 0.00 00:16:53.136 =================================================================================================================== 00:16:53.136 Total : 14630.67 57.15 0.00 0.00 0.00 0.00 0.00 00:16:53.136 00:16:54.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:54.068 Nvme0n1 : 4.00 14789.50 57.77 0.00 0.00 0.00 0.00 0.00 00:16:54.068 =================================================================================================================== 00:16:54.068 Total : 14789.50 57.77 0.00 0.00 0.00 0.00 0.00 00:16:54.068 00:16:55.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.444 Nvme0n1 : 5.00 14840.40 57.97 0.00 0.00 0.00 0.00 0.00 00:16:55.444 =================================================================================================================== 00:16:55.444 Total : 14840.40 57.97 0.00 0.00 0.00 0.00 0.00 00:16:55.444 00:16:56.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:56.379 Nvme0n1 : 6.00 14890.50 58.17 0.00 0.00 0.00 0.00 0.00 00:16:56.379 =================================================================================================================== 00:16:56.379 Total : 14890.50 58.17 0.00 0.00 0.00 0.00 0.00 00:16:56.379 00:16:57.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.314 Nvme0n1 : 7.00 14951.86 58.41 0.00 0.00 0.00 0.00 0.00 00:16:57.314 
=================================================================================================================== 00:16:57.314 Total : 14951.86 58.41 0.00 0.00 0.00 0.00 0.00 00:16:57.314 00:16:58.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.247 Nvme0n1 : 8.00 14969.62 58.48 0.00 0.00 0.00 0.00 0.00 00:16:58.247 =================================================================================================================== 00:16:58.247 Total : 14969.62 58.48 0.00 0.00 0.00 0.00 0.00 00:16:58.247 00:16:59.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.182 Nvme0n1 : 9.00 15010.00 58.63 0.00 0.00 0.00 0.00 0.00 00:16:59.182 =================================================================================================================== 00:16:59.182 Total : 15010.00 58.63 0.00 0.00 0.00 0.00 0.00 00:16:59.182 00:17:00.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.115 Nvme0n1 : 10.00 15025.10 58.69 0.00 0.00 0.00 0.00 0.00 00:17:00.115 =================================================================================================================== 00:17:00.115 Total : 15025.10 58.69 0.00 0.00 0.00 0.00 0.00 00:17:00.115 00:17:00.115 00:17:00.115 Latency(us) 00:17:00.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.115 Nvme0n1 : 10.01 15022.48 58.68 0.00 0.00 8514.59 2548.62 16311.18 00:17:00.115 =================================================================================================================== 00:17:00.115 Total : 15022.48 58.68 0.00 0.00 8514.59 2548.62 16311.18 00:17:00.115 0 00:17:00.115 23:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 766218 00:17:00.115 23:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 
766218 ']' 00:17:00.115 23:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 766218 00:17:00.115 23:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:00.115 23:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:00.115 23:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 766218 00:17:00.115 23:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:00.115 23:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:00.115 23:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 766218' 00:17:00.115 killing process with pid 766218 00:17:00.115 23:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 766218 00:17:00.115 Received shutdown signal, test time was about 10.000000 seconds 00:17:00.115 00:17:00.115 Latency(us) 00:17:00.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.115 =================================================================================================================== 00:17:00.115 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:00.115 23:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 766218 00:17:00.372 23:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:00.630 23:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:01.195 23:57:06 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4caf248-9e05-4818-9cf3-77d6a9792d1b 00:17:01.195 23:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:01.195 23:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:01.195 23:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:01.195 23:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:01.452 [2024-07-24 23:57:07.091883] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:01.452 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4caf248-9e05-4818-9cf3-77d6a9792d1b 00:17:01.452 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:01.452 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4caf248-9e05-4818-9cf3-77d6a9792d1b 00:17:01.452 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:01.452 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:01.452 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:01.452 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:01.452 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:01.452 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:01.452 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:01.452 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:01.453 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4caf248-9e05-4818-9cf3-77d6a9792d1b 00:17:01.710 request: 00:17:01.710 { 00:17:01.710 "uuid": "b4caf248-9e05-4818-9cf3-77d6a9792d1b", 00:17:01.710 "method": "bdev_lvol_get_lvstores", 00:17:01.710 "req_id": 1 00:17:01.710 } 00:17:01.710 Got JSON-RPC error response 00:17:01.710 response: 00:17:01.710 { 00:17:01.710 "code": -19, 00:17:01.710 "message": "No such device" 00:17:01.710 } 00:17:01.710 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:01.710 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:01.710 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:01.710 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:01.710 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:01.968 aio_bdev 
00:17:01.968 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dc670a4d-8fe9-480c-9872-ec67db3c23a8 00:17:01.968 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=dc670a4d-8fe9-480c-9872-ec67db3c23a8 00:17:01.968 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:01.968 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:01.968 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:01.968 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:01.968 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:02.226 23:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dc670a4d-8fe9-480c-9872-ec67db3c23a8 -t 2000 00:17:02.484 [ 00:17:02.484 { 00:17:02.484 "name": "dc670a4d-8fe9-480c-9872-ec67db3c23a8", 00:17:02.484 "aliases": [ 00:17:02.484 "lvs/lvol" 00:17:02.484 ], 00:17:02.484 "product_name": "Logical Volume", 00:17:02.484 "block_size": 4096, 00:17:02.484 "num_blocks": 38912, 00:17:02.484 "uuid": "dc670a4d-8fe9-480c-9872-ec67db3c23a8", 00:17:02.484 "assigned_rate_limits": { 00:17:02.484 "rw_ios_per_sec": 0, 00:17:02.484 "rw_mbytes_per_sec": 0, 00:17:02.484 "r_mbytes_per_sec": 0, 00:17:02.484 "w_mbytes_per_sec": 0 00:17:02.484 }, 00:17:02.484 "claimed": false, 00:17:02.484 "zoned": false, 00:17:02.484 "supported_io_types": { 00:17:02.484 "read": true, 00:17:02.484 "write": true, 00:17:02.484 "unmap": true, 00:17:02.484 "write_zeroes": true, 00:17:02.484 "flush": false, 00:17:02.484 "reset": true, 00:17:02.484 "compare": false, 
00:17:02.484 "compare_and_write": false, 00:17:02.484 "abort": false, 00:17:02.484 "nvme_admin": false, 00:17:02.484 "nvme_io": false 00:17:02.484 }, 00:17:02.484 "driver_specific": { 00:17:02.484 "lvol": { 00:17:02.484 "lvol_store_uuid": "b4caf248-9e05-4818-9cf3-77d6a9792d1b", 00:17:02.484 "base_bdev": "aio_bdev", 00:17:02.484 "thin_provision": false, 00:17:02.484 "num_allocated_clusters": 38, 00:17:02.484 "snapshot": false, 00:17:02.484 "clone": false, 00:17:02.484 "esnap_clone": false 00:17:02.484 } 00:17:02.484 } 00:17:02.484 } 00:17:02.484 ] 00:17:02.484 23:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:02.484 23:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4caf248-9e05-4818-9cf3-77d6a9792d1b 00:17:02.484 23:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:02.742 23:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:02.742 23:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:02.742 23:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4caf248-9e05-4818-9cf3-77d6a9792d1b 00:17:03.000 23:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:03.000 23:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dc670a4d-8fe9-480c-9872-ec67db3c23a8 00:17:03.258 23:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
b4caf248-9e05-4818-9cf3-77d6a9792d1b 00:17:03.515 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:03.771 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:03.771 00:17:03.771 real 0m17.345s 00:17:03.771 user 0m16.778s 00:17:03.771 sys 0m1.907s 00:17:03.771 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:03.771 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:03.771 ************************************ 00:17:03.771 END TEST lvs_grow_clean 00:17:03.771 ************************************ 00:17:03.771 23:57:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:03.771 23:57:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:03.771 23:57:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:03.771 23:57:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:03.771 ************************************ 00:17:03.771 START TEST lvs_grow_dirty 00:17:03.771 ************************************ 00:17:03.771 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:03.771 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:03.771 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:03.771 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:03.771 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:17:03.771 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:03.771 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:03.771 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:03.771 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:03.771 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:04.028 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:04.028 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:04.285 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e3259cb3-c102-4b4b-abe9-ad8896f61306 00:17:04.285 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3259cb3-c102-4b4b-abe9-ad8896f61306 00:17:04.285 23:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:04.542 23:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:04.542 23:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:04.542 
23:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e3259cb3-c102-4b4b-abe9-ad8896f61306 lvol 150 00:17:04.799 23:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f8cbfdd5-00fe-450b-a1c5-6624b2738879 00:17:04.799 23:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:04.799 23:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:05.057 [2024-07-24 23:57:10.686225] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:05.057 [2024-07-24 23:57:10.686307] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:05.057 true 00:17:05.057 23:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3259cb3-c102-4b4b-abe9-ad8896f61306 00:17:05.057 23:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:05.314 23:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:05.314 23:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:05.572 23:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 f8cbfdd5-00fe-450b-a1c5-6624b2738879 00:17:05.829 23:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:06.087 [2024-07-24 23:57:11.649193] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:06.087 23:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:06.346 23:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=768882 00:17:06.346 23:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:06.346 23:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 768882 /var/tmp/bdevperf.sock 00:17:06.346 23:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 768882 ']' 00:17:06.346 23:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:06.346 23:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:06.346 23:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:06.346 23:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:06.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:06.346 23:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:06.346 23:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:06.346 [2024-07-24 23:57:11.952164] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:06.346 [2024-07-24 23:57:11.952257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768882 ] 00:17:06.346 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.346 [2024-07-24 23:57:12.011541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.604 [2024-07-24 23:57:12.098795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.604 23:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:06.604 23:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:06.604 23:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:06.861 Nvme0n1 00:17:06.861 23:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:07.427 [ 00:17:07.427 { 00:17:07.427 "name": "Nvme0n1", 00:17:07.427 "aliases": [ 00:17:07.427 "f8cbfdd5-00fe-450b-a1c5-6624b2738879" 00:17:07.427 ], 00:17:07.427 "product_name": "NVMe disk", 00:17:07.427 "block_size": 4096, 00:17:07.427 "num_blocks": 
38912, 00:17:07.427 "uuid": "f8cbfdd5-00fe-450b-a1c5-6624b2738879", 00:17:07.427 "assigned_rate_limits": { 00:17:07.427 "rw_ios_per_sec": 0, 00:17:07.427 "rw_mbytes_per_sec": 0, 00:17:07.427 "r_mbytes_per_sec": 0, 00:17:07.427 "w_mbytes_per_sec": 0 00:17:07.427 }, 00:17:07.427 "claimed": false, 00:17:07.427 "zoned": false, 00:17:07.427 "supported_io_types": { 00:17:07.427 "read": true, 00:17:07.427 "write": true, 00:17:07.427 "unmap": true, 00:17:07.427 "write_zeroes": true, 00:17:07.427 "flush": true, 00:17:07.427 "reset": true, 00:17:07.427 "compare": true, 00:17:07.427 "compare_and_write": true, 00:17:07.427 "abort": true, 00:17:07.427 "nvme_admin": true, 00:17:07.427 "nvme_io": true 00:17:07.427 }, 00:17:07.427 "memory_domains": [ 00:17:07.427 { 00:17:07.427 "dma_device_id": "system", 00:17:07.427 "dma_device_type": 1 00:17:07.427 } 00:17:07.427 ], 00:17:07.427 "driver_specific": { 00:17:07.427 "nvme": [ 00:17:07.427 { 00:17:07.427 "trid": { 00:17:07.427 "trtype": "TCP", 00:17:07.427 "adrfam": "IPv4", 00:17:07.427 "traddr": "10.0.0.2", 00:17:07.427 "trsvcid": "4420", 00:17:07.427 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:07.427 }, 00:17:07.427 "ctrlr_data": { 00:17:07.427 "cntlid": 1, 00:17:07.427 "vendor_id": "0x8086", 00:17:07.427 "model_number": "SPDK bdev Controller", 00:17:07.427 "serial_number": "SPDK0", 00:17:07.427 "firmware_revision": "24.05.1", 00:17:07.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:07.427 "oacs": { 00:17:07.427 "security": 0, 00:17:07.427 "format": 0, 00:17:07.427 "firmware": 0, 00:17:07.427 "ns_manage": 0 00:17:07.427 }, 00:17:07.427 "multi_ctrlr": true, 00:17:07.427 "ana_reporting": false 00:17:07.427 }, 00:17:07.427 "vs": { 00:17:07.427 "nvme_version": "1.3" 00:17:07.427 }, 00:17:07.427 "ns_data": { 00:17:07.427 "id": 1, 00:17:07.427 "can_share": true 00:17:07.427 } 00:17:07.427 } 00:17:07.427 ], 00:17:07.427 "mp_policy": "active_passive" 00:17:07.427 } 00:17:07.427 } 00:17:07.427 ] 00:17:07.427 23:57:12 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=769014 00:17:07.427 23:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:07.427 23:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:07.427 Running I/O for 10 seconds... 00:17:08.364 Latency(us) 00:17:08.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.364 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:08.364 Nvme0n1 : 1.00 14324.00 55.95 0.00 0.00 0.00 0.00 0.00 00:17:08.364 =================================================================================================================== 00:17:08.364 Total : 14324.00 55.95 0.00 0.00 0.00 0.00 0.00 00:17:08.364 00:17:09.335 23:57:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e3259cb3-c102-4b4b-abe9-ad8896f61306 00:17:09.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:09.335 Nvme0n1 : 2.00 14399.50 56.25 0.00 0.00 0.00 0.00 0.00 00:17:09.335 =================================================================================================================== 00:17:09.335 Total : 14399.50 56.25 0.00 0.00 0.00 0.00 0.00 00:17:09.335 00:17:09.593 true 00:17:09.593 23:57:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3259cb3-c102-4b4b-abe9-ad8896f61306 00:17:09.593 23:57:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:09.851 23:57:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 
00:17:09.851 23:57:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:09.851 23:57:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 769014 00:17:10.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:10.417 Nvme0n1 : 3.00 14699.33 57.42 0.00 0.00 0.00 0.00 0.00 00:17:10.417 =================================================================================================================== 00:17:10.417 Total : 14699.33 57.42 0.00 0.00 0.00 0.00 0.00 00:17:10.417 00:17:11.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:11.352 Nvme0n1 : 4.00 14814.75 57.87 0.00 0.00 0.00 0.00 0.00 00:17:11.352 =================================================================================================================== 00:17:11.352 Total : 14814.75 57.87 0.00 0.00 0.00 0.00 0.00 00:17:11.352 00:17:12.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.285 Nvme0n1 : 5.00 14870.00 58.09 0.00 0.00 0.00 0.00 0.00 00:17:12.285 =================================================================================================================== 00:17:12.285 Total : 14870.00 58.09 0.00 0.00 0.00 0.00 0.00 00:17:12.285 00:17:13.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.658 Nvme0n1 : 6.00 14985.67 58.54 0.00 0.00 0.00 0.00 0.00 00:17:13.658 =================================================================================================================== 00:17:13.658 Total : 14985.67 58.54 0.00 0.00 0.00 0.00 0.00 00:17:13.658 00:17:14.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.591 Nvme0n1 : 7.00 15002.14 58.60 0.00 0.00 0.00 0.00 0.00 00:17:14.591 =================================================================================================================== 00:17:14.591 Total : 15002.14 58.60 0.00 
0.00 0.00 0.00 0.00 00:17:14.591 00:17:15.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.524 Nvme0n1 : 8.00 15067.50 58.86 0.00 0.00 0.00 0.00 0.00 00:17:15.524 =================================================================================================================== 00:17:15.524 Total : 15067.50 58.86 0.00 0.00 0.00 0.00 0.00 00:17:15.524 00:17:16.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.456 Nvme0n1 : 9.00 15108.00 59.02 0.00 0.00 0.00 0.00 0.00 00:17:16.456 =================================================================================================================== 00:17:16.456 Total : 15108.00 59.02 0.00 0.00 0.00 0.00 0.00 00:17:16.456 00:17:17.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.390 Nvme0n1 : 10.00 15118.20 59.06 0.00 0.00 0.00 0.00 0.00 00:17:17.390 =================================================================================================================== 00:17:17.390 Total : 15118.20 59.06 0.00 0.00 0.00 0.00 0.00 00:17:17.390 00:17:17.390 00:17:17.390 Latency(us) 00:17:17.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.390 Nvme0n1 : 10.01 15122.98 59.07 0.00 0.00 8458.48 2293.76 16893.72 00:17:17.390 =================================================================================================================== 00:17:17.390 Total : 15122.98 59.07 0.00 0.00 8458.48 2293.76 16893.72 00:17:17.390 0 00:17:17.390 23:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 768882 00:17:17.390 23:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 768882 ']' 00:17:17.390 23:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 768882 00:17:17.390 23:57:23 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:17.390 23:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:17.390 23:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 768882 00:17:17.390 23:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:17.390 23:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:17.390 23:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 768882' 00:17:17.390 killing process with pid 768882 00:17:17.390 23:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 768882 00:17:17.391 Received shutdown signal, test time was about 10.000000 seconds 00:17:17.391 00:17:17.391 Latency(us) 00:17:17.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.391 =================================================================================================================== 00:17:17.391 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:17.391 23:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 768882 00:17:17.648 23:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:17.905 23:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:18.470 23:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
e3259cb3-c102-4b4b-abe9-ad8896f61306 00:17:18.470 23:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:18.470 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:18.470 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:18.470 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 765787 00:17:18.470 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 765787 00:17:18.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 765787 Killed "${NVMF_APP[@]}" "$@" 00:17:18.470 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:18.470 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:18.470 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:18.470 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:18.470 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:18.470 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:18.470 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=770342 00:17:18.470 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 770342 00:17:18.470 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 770342 ']' 00:17:18.470 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.470 23:57:24 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:18.470 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.471 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:18.471 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:18.728 [2024-07-24 23:57:24.194407] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:18.729 [2024-07-24 23:57:24.194503] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.729 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.729 [2024-07-24 23:57:24.260980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.729 [2024-07-24 23:57:24.348632] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.729 [2024-07-24 23:57:24.348703] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.729 [2024-07-24 23:57:24.348718] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.729 [2024-07-24 23:57:24.348729] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.729 [2024-07-24 23:57:24.348738] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:18.729 [2024-07-24 23:57:24.348768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.986 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:18.987 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:18.987 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:18.987 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:18.987 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:18.987 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.987 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:19.244 [2024-07-24 23:57:24.768849] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:19.244 [2024-07-24 23:57:24.768997] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:19.244 [2024-07-24 23:57:24.769055] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:19.244 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:19.245 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f8cbfdd5-00fe-450b-a1c5-6624b2738879 00:17:19.245 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=f8cbfdd5-00fe-450b-a1c5-6624b2738879 00:17:19.245 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:19.245 23:57:24 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:19.245 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:19.245 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:19.245 23:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:19.502 23:57:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f8cbfdd5-00fe-450b-a1c5-6624b2738879 -t 2000 00:17:19.760 [ 00:17:19.760 { 00:17:19.760 "name": "f8cbfdd5-00fe-450b-a1c5-6624b2738879", 00:17:19.760 "aliases": [ 00:17:19.760 "lvs/lvol" 00:17:19.760 ], 00:17:19.760 "product_name": "Logical Volume", 00:17:19.760 "block_size": 4096, 00:17:19.760 "num_blocks": 38912, 00:17:19.760 "uuid": "f8cbfdd5-00fe-450b-a1c5-6624b2738879", 00:17:19.760 "assigned_rate_limits": { 00:17:19.760 "rw_ios_per_sec": 0, 00:17:19.760 "rw_mbytes_per_sec": 0, 00:17:19.760 "r_mbytes_per_sec": 0, 00:17:19.760 "w_mbytes_per_sec": 0 00:17:19.760 }, 00:17:19.760 "claimed": false, 00:17:19.760 "zoned": false, 00:17:19.760 "supported_io_types": { 00:17:19.760 "read": true, 00:17:19.760 "write": true, 00:17:19.760 "unmap": true, 00:17:19.760 "write_zeroes": true, 00:17:19.760 "flush": false, 00:17:19.760 "reset": true, 00:17:19.760 "compare": false, 00:17:19.760 "compare_and_write": false, 00:17:19.760 "abort": false, 00:17:19.760 "nvme_admin": false, 00:17:19.760 "nvme_io": false 00:17:19.760 }, 00:17:19.760 "driver_specific": { 00:17:19.760 "lvol": { 00:17:19.760 "lvol_store_uuid": "e3259cb3-c102-4b4b-abe9-ad8896f61306", 00:17:19.760 "base_bdev": "aio_bdev", 00:17:19.760 "thin_provision": false, 00:17:19.760 "num_allocated_clusters": 38, 00:17:19.760 "snapshot": false, 00:17:19.760 
"clone": false, 00:17:19.760 "esnap_clone": false 00:17:19.760 } 00:17:19.760 } 00:17:19.760 } 00:17:19.760 ] 00:17:19.760 23:57:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:19.760 23:57:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3259cb3-c102-4b4b-abe9-ad8896f61306 00:17:19.760 23:57:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:20.018 23:57:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:20.018 23:57:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3259cb3-c102-4b4b-abe9-ad8896f61306 00:17:20.018 23:57:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:20.276 23:57:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:20.276 23:57:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:20.276 [2024-07-24 23:57:25.985465] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:20.535 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3259cb3-c102-4b4b-abe9-ad8896f61306 00:17:20.535 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:20.535 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u e3259cb3-c102-4b4b-abe9-ad8896f61306 00:17:20.535 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.535 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.535 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.535 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.535 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.535 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.535 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.535 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:20.535 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3259cb3-c102-4b4b-abe9-ad8896f61306 00:17:20.792 request: 00:17:20.792 { 00:17:20.792 "uuid": "e3259cb3-c102-4b4b-abe9-ad8896f61306", 00:17:20.792 "method": "bdev_lvol_get_lvstores", 00:17:20.792 "req_id": 1 00:17:20.792 } 00:17:20.792 Got JSON-RPC error response 00:17:20.792 response: 00:17:20.792 { 00:17:20.792 "code": -19, 00:17:20.792 "message": "No such device" 00:17:20.792 } 00:17:20.792 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:20.792 23:57:26 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:20.792 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:20.793 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:20.793 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:20.793 aio_bdev 00:17:21.050 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f8cbfdd5-00fe-450b-a1c5-6624b2738879 00:17:21.050 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=f8cbfdd5-00fe-450b-a1c5-6624b2738879 00:17:21.050 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:21.050 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:21.050 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:21.050 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:21.050 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:21.308 23:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f8cbfdd5-00fe-450b-a1c5-6624b2738879 -t 2000 00:17:21.566 [ 00:17:21.566 { 00:17:21.566 "name": "f8cbfdd5-00fe-450b-a1c5-6624b2738879", 00:17:21.566 "aliases": [ 00:17:21.566 "lvs/lvol" 00:17:21.566 ], 00:17:21.566 "product_name": "Logical Volume", 00:17:21.566 "block_size": 4096, 
00:17:21.566 "num_blocks": 38912, 00:17:21.566 "uuid": "f8cbfdd5-00fe-450b-a1c5-6624b2738879", 00:17:21.566 "assigned_rate_limits": { 00:17:21.566 "rw_ios_per_sec": 0, 00:17:21.566 "rw_mbytes_per_sec": 0, 00:17:21.566 "r_mbytes_per_sec": 0, 00:17:21.566 "w_mbytes_per_sec": 0 00:17:21.566 }, 00:17:21.566 "claimed": false, 00:17:21.566 "zoned": false, 00:17:21.566 "supported_io_types": { 00:17:21.566 "read": true, 00:17:21.566 "write": true, 00:17:21.566 "unmap": true, 00:17:21.566 "write_zeroes": true, 00:17:21.566 "flush": false, 00:17:21.566 "reset": true, 00:17:21.566 "compare": false, 00:17:21.566 "compare_and_write": false, 00:17:21.566 "abort": false, 00:17:21.566 "nvme_admin": false, 00:17:21.566 "nvme_io": false 00:17:21.566 }, 00:17:21.566 "driver_specific": { 00:17:21.566 "lvol": { 00:17:21.566 "lvol_store_uuid": "e3259cb3-c102-4b4b-abe9-ad8896f61306", 00:17:21.566 "base_bdev": "aio_bdev", 00:17:21.566 "thin_provision": false, 00:17:21.566 "num_allocated_clusters": 38, 00:17:21.566 "snapshot": false, 00:17:21.566 "clone": false, 00:17:21.566 "esnap_clone": false 00:17:21.566 } 00:17:21.566 } 00:17:21.566 } 00:17:21.566 ] 00:17:21.566 23:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:21.566 23:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3259cb3-c102-4b4b-abe9-ad8896f61306 00:17:21.566 23:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:21.824 23:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:21.824 23:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3259cb3-c102-4b4b-abe9-ad8896f61306 00:17:21.824 23:57:27 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:22.082 23:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:22.082 23:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f8cbfdd5-00fe-450b-a1c5-6624b2738879 00:17:22.340 23:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e3259cb3-c102-4b4b-abe9-ad8896f61306 00:17:22.597 23:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:22.856 00:17:22.856 real 0m19.007s 00:17:22.856 user 0m47.897s 00:17:22.856 sys 0m4.742s 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:22.856 ************************************ 00:17:22.856 END TEST lvs_grow_dirty 00:17:22.856 ************************************ 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:22.856 
23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:22.856 nvmf_trace.0 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:22.856 23:57:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:22.856 rmmod nvme_tcp 00:17:22.856 rmmod nvme_fabrics 00:17:23.114 rmmod nvme_keyring 00:17:23.114 23:57:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:23.114 23:57:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:23.114 23:57:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:23.114 23:57:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 770342 ']' 00:17:23.114 23:57:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 770342 00:17:23.114 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 770342 ']' 00:17:23.114 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 770342 00:17:23.114 23:57:28 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:23.114 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:23.114 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 770342 00:17:23.114 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:23.114 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:23.114 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 770342' 00:17:23.114 killing process with pid 770342 00:17:23.114 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 770342 00:17:23.114 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 770342 00:17:23.372 23:57:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:23.372 23:57:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:23.372 23:57:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:23.372 23:57:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:23.372 23:57:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:23.372 23:57:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.372 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.372 23:57:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.275 23:57:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:25.275 00:17:25.275 real 0m41.672s 00:17:25.275 user 1m10.406s 00:17:25.275 sys 0m8.545s 00:17:25.275 23:57:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:25.275 23:57:30 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.275 ************************************ 00:17:25.275 END TEST nvmf_lvs_grow 00:17:25.275 ************************************ 00:17:25.275 23:57:30 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:25.275 23:57:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:25.275 23:57:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:25.276 23:57:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:25.276 ************************************ 00:17:25.276 START TEST nvmf_bdev_io_wait 00:17:25.276 ************************************ 00:17:25.276 23:57:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:25.534 * Looking for test storage... 00:17:25.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 
00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.534 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:25.535 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:25.535 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:25.535 23:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@297 -- # x722=() 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:27.516 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:27.516 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp 
== rdma ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:27.516 Found net devices under 0000:09:00.0: cvl_0_0 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:27.516 Found net devices under 0000:09:00.1: cvl_0_1 00:17:27.516 23:57:32 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:27.516 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:27.517 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:27.517 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:27.517 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.517 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.517 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:27.517 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:27.517 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:27.517 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:27.517 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:27.517 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:27.517 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.517 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:27.517 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:27.517 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:27.517 23:57:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:27.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:17:27.517 00:17:27.517 --- 10.0.0.2 ping statistics --- 00:17:27.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.517 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:27.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:27.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:17:27.517 00:17:27.517 --- 10.0.0.1 ping statistics --- 00:17:27.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.517 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=772860 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 772860 00:17:27.517 
23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 772860 ']' 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:27.517 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:27.791 [2024-07-24 23:57:33.184307] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:27.791 [2024-07-24 23:57:33.184400] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.791 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.791 [2024-07-24 23:57:33.254331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:27.791 [2024-07-24 23:57:33.347986] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.791 [2024-07-24 23:57:33.348046] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.791 [2024-07-24 23:57:33.348063] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.791 [2024-07-24 23:57:33.348077] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:27.791 [2024-07-24 23:57:33.348089] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.791 [2024-07-24 23:57:33.348167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.791 [2024-07-24 23:57:33.348245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.791 [2024-07-24 23:57:33.348340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:27.791 [2024-07-24 23:57:33.348343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.791 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:27.791 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:27.791 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:27.791 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.791 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:27.791 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.791 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:27.791 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.791 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:27.791 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.791 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:27.791 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.791 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:27.791 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.792 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:27.792 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.792 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:27.792 [2024-07-24 23:57:33.494240] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.792 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.792 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:27.792 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.792 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:28.049 Malloc0 00:17:28.049 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.049 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:28.049 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:28.050 [2024-07-24 23:57:33.563868] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=772887 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=772889 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:28.050 { 00:17:28.050 "params": { 00:17:28.050 "name": "Nvme$subsystem", 00:17:28.050 "trtype": "$TEST_TRANSPORT", 00:17:28.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:28.050 "adrfam": "ipv4", 00:17:28.050 "trsvcid": "$NVMF_PORT", 00:17:28.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:28.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:28.050 "hdgst": ${hdgst:-false}, 00:17:28.050 "ddgst": ${ddgst:-false} 00:17:28.050 }, 00:17:28.050 "method": "bdev_nvme_attach_controller" 00:17:28.050 } 
00:17:28.050 EOF 00:17:28.050 )") 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=772891 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:28.050 { 00:17:28.050 "params": { 00:17:28.050 "name": "Nvme$subsystem", 00:17:28.050 "trtype": "$TEST_TRANSPORT", 00:17:28.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:28.050 "adrfam": "ipv4", 00:17:28.050 "trsvcid": "$NVMF_PORT", 00:17:28.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:28.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:28.050 "hdgst": ${hdgst:-false}, 00:17:28.050 "ddgst": ${ddgst:-false} 00:17:28.050 }, 00:17:28.050 "method": "bdev_nvme_attach_controller" 00:17:28.050 } 00:17:28.050 EOF 00:17:28.050 )") 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=772894 
00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:28.050 { 00:17:28.050 "params": { 00:17:28.050 "name": "Nvme$subsystem", 00:17:28.050 "trtype": "$TEST_TRANSPORT", 00:17:28.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:28.050 "adrfam": "ipv4", 00:17:28.050 "trsvcid": "$NVMF_PORT", 00:17:28.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:28.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:28.050 "hdgst": ${hdgst:-false}, 00:17:28.050 "ddgst": ${ddgst:-false} 00:17:28.050 }, 00:17:28.050 "method": "bdev_nvme_attach_controller" 00:17:28.050 } 00:17:28.050 EOF 00:17:28.050 )") 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:28.050 { 00:17:28.050 "params": { 00:17:28.050 "name": "Nvme$subsystem", 00:17:28.050 "trtype": "$TEST_TRANSPORT", 00:17:28.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:28.050 "adrfam": "ipv4", 00:17:28.050 "trsvcid": "$NVMF_PORT", 00:17:28.050 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:28.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:28.050 "hdgst": ${hdgst:-false}, 00:17:28.050 "ddgst": ${ddgst:-false} 00:17:28.050 }, 00:17:28.050 "method": "bdev_nvme_attach_controller" 00:17:28.050 } 00:17:28.050 EOF 00:17:28.050 )") 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 772887 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:28.050 "params": { 00:17:28.050 "name": "Nvme1", 00:17:28.050 "trtype": "tcp", 00:17:28.050 "traddr": "10.0.0.2", 00:17:28.050 "adrfam": "ipv4", 00:17:28.050 "trsvcid": "4420", 00:17:28.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:28.050 "hdgst": false, 00:17:28.050 "ddgst": false 00:17:28.050 }, 00:17:28.050 "method": "bdev_nvme_attach_controller" 00:17:28.050 }' 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:28.050 "params": { 00:17:28.050 "name": "Nvme1", 00:17:28.050 "trtype": "tcp", 00:17:28.050 "traddr": "10.0.0.2", 00:17:28.050 "adrfam": "ipv4", 00:17:28.050 "trsvcid": "4420", 00:17:28.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:28.050 "hdgst": false, 00:17:28.050 "ddgst": false 00:17:28.050 }, 00:17:28.050 "method": "bdev_nvme_attach_controller" 00:17:28.050 }' 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:28.050 "params": { 00:17:28.050 "name": "Nvme1", 00:17:28.050 "trtype": "tcp", 00:17:28.050 "traddr": "10.0.0.2", 00:17:28.050 "adrfam": "ipv4", 00:17:28.050 "trsvcid": "4420", 00:17:28.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:28.050 "hdgst": false, 00:17:28.050 "ddgst": false 00:17:28.050 }, 00:17:28.050 "method": "bdev_nvme_attach_controller" 00:17:28.050 }' 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:28.050 23:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:28.050 "params": { 00:17:28.050 "name": "Nvme1", 00:17:28.050 "trtype": "tcp", 00:17:28.050 "traddr": "10.0.0.2", 00:17:28.050 "adrfam": "ipv4", 00:17:28.050 "trsvcid": "4420", 00:17:28.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:28.050 "hdgst": false, 00:17:28.050 "ddgst": false 00:17:28.050 }, 00:17:28.050 "method": "bdev_nvme_attach_controller" 00:17:28.050 }' 00:17:28.051 [2024-07-24 23:57:33.610736] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 
22.11.4 initialization... 00:17:28.051 [2024-07-24 23:57:33.610736] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:28.051 [2024-07-24 23:57:33.610736] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:28.051 [2024-07-24 23:57:33.610829] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:28.051 [2024-07-24 23:57:33.610830] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:28.051 [2024-07-24 23:57:33.610829] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:28.051 [2024-07-24 23:57:33.612150] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:17:28.051 [2024-07-24 23:57:33.612221] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:28.051 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.051 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.309 [2024-07-24 23:57:33.790775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.309 [2024-07-24 23:57:33.871574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:28.309 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.309 [2024-07-24 23:57:33.900875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.309 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.309 [2024-07-24 23:57:33.980278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:28.309 [2024-07-24 23:57:34.004956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.566 [2024-07-24 23:57:34.084137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.566 [2024-07-24 23:57:34.088580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:28.566 [2024-07-24 23:57:34.157850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:28.824 Running I/O for 1 seconds... 00:17:28.824 Running I/O for 1 seconds... 00:17:28.824 Running I/O for 1 seconds... 00:17:28.824 Running I/O for 1 seconds... 
00:17:29.758 00:17:29.758 Latency(us) 00:17:29.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.758 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:29.758 Nvme1n1 : 1.01 9930.71 38.79 0.00 0.00 12838.20 7087.60 21554.06 00:17:29.758 =================================================================================================================== 00:17:29.758 Total : 9930.71 38.79 0.00 0.00 12838.20 7087.60 21554.06 00:17:29.758 00:17:29.758 Latency(us) 00:17:29.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.758 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:29.758 Nvme1n1 : 1.00 200472.65 783.10 0.00 0.00 636.08 263.96 831.34 00:17:29.758 =================================================================================================================== 00:17:29.758 Total : 200472.65 783.10 0.00 0.00 636.08 263.96 831.34 00:17:29.758 00:17:29.759 Latency(us) 00:17:29.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.759 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:29.759 Nvme1n1 : 1.01 9093.21 35.52 0.00 0.00 14004.76 9417.77 25826.04 00:17:29.759 =================================================================================================================== 00:17:29.759 Total : 9093.21 35.52 0.00 0.00 14004.76 9417.77 25826.04 00:17:29.759 00:17:29.759 Latency(us) 00:17:29.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.759 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:29.759 Nvme1n1 : 1.01 9377.55 36.63 0.00 0.00 13599.62 6359.42 24660.95 00:17:29.759 =================================================================================================================== 00:17:29.759 Total : 9377.55 36.63 0.00 0.00 13599.62 6359.42 24660.95 00:17:30.017 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 772889 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 772891 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 772894 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.276 rmmod nvme_tcp 00:17:30.276 rmmod nvme_fabrics 00:17:30.276 rmmod nvme_keyring 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 772860 ']' 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 772860 00:17:30.276 23:57:35 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 772860 ']' 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 772860 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 772860 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 772860' 00:17:30.276 killing process with pid 772860 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 772860 00:17:30.276 23:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 772860 00:17:30.536 23:57:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:30.536 23:57:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:30.536 23:57:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:30.536 23:57:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.536 23:57:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.536 23:57:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.536 23:57:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.536 23:57:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.441 23:57:38 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:32.441 00:17:32.441 real 0m7.151s 00:17:32.441 user 0m15.963s 00:17:32.441 sys 0m3.746s 00:17:32.441 23:57:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:32.441 23:57:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:32.441 ************************************ 00:17:32.441 END TEST nvmf_bdev_io_wait 00:17:32.441 ************************************ 00:17:32.441 23:57:38 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:32.441 23:57:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:32.441 23:57:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:32.441 23:57:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:32.441 ************************************ 00:17:32.441 START TEST nvmf_queue_depth 00:17:32.441 ************************************ 00:17:32.441 23:57:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:32.700 * Looking for test storage... 
00:17:32.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:32.700 23:57:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.700 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:32.700 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.700 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.700 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.700 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.700 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.700 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:32.701 23:57:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:34.606 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:34.606 Found 0000:09:00.1 (0x8086 - 
0x159b) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:34.606 Found net devices under 0000:09:00.0: cvl_0_0 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:34.606 Found net devices under 0000:09:00.1: cvl_0_1 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:17:34.606 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:34.607 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.607 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:34.607 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:34.607 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:34.607 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:34.607 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:34.607 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:34.607 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:34.607 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:34.607 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:34.607 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:34.607 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:34.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:34.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:17:34.865 00:17:34.865 --- 10.0.0.2 ping statistics --- 00:17:34.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.865 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:34.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:34.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:17:34.865 00:17:34.865 --- 10.0.0.1 ping statistics --- 00:17:34.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.865 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=775108 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 775108 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 775108 ']' 00:17:34.865 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.866 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:34.866 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.866 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:34.866 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:34.866 [2024-07-24 23:57:40.404141] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:34.866 [2024-07-24 23:57:40.404242] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.866 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.866 [2024-07-24 23:57:40.472956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.866 [2024-07-24 23:57:40.566749] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:34.866 [2024-07-24 23:57:40.566805] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.866 [2024-07-24 23:57:40.566832] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.866 [2024-07-24 23:57:40.566845] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.866 [2024-07-24 23:57:40.566858] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:34.866 [2024-07-24 23:57:40.566887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:35.124 [2024-07-24 23:57:40.712820] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:35.124 23:57:40 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:35.124 Malloc0 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:35.124 [2024-07-24 23:57:40.783031] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=775249 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id 
$NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 775249 /var/tmp/bdevperf.sock 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 775249 ']' 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:35.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:35.124 23:57:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:35.124 [2024-07-24 23:57:40.830864] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:17:35.124 [2024-07-24 23:57:40.830938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775249 ] 00:17:35.382 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.382 [2024-07-24 23:57:40.889866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.382 [2024-07-24 23:57:40.976475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.382 23:57:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:35.382 23:57:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:35.382 23:57:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:35.382 23:57:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.382 23:57:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:35.639 NVMe0n1 00:17:35.639 23:57:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.639 23:57:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:35.896 Running I/O for 10 seconds... 
00:17:45.862 00:17:45.862 Latency(us) 00:17:45.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.862 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:45.862 Verification LBA range: start 0x0 length 0x4000 00:17:45.862 NVMe0n1 : 10.10 8504.30 33.22 0.00 0.00 119935.30 24272.59 83497.72 00:17:45.862 =================================================================================================================== 00:17:45.862 Total : 8504.30 33.22 0.00 0.00 119935.30 24272.59 83497.72 00:17:45.862 0 00:17:45.862 23:57:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 775249 00:17:45.862 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 775249 ']' 00:17:45.862 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 775249 00:17:45.862 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:45.862 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:45.862 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 775249 00:17:45.862 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:45.862 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:45.862 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 775249' 00:17:45.862 killing process with pid 775249 00:17:45.862 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 775249 00:17:45.862 Received shutdown signal, test time was about 10.000000 seconds 00:17:45.862 00:17:45.862 Latency(us) 00:17:45.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.862 
=================================================================================================================== 00:17:45.862 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:45.862 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 775249 00:17:46.119 23:57:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:46.120 23:57:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:46.120 23:57:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:46.120 23:57:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:46.120 23:57:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:46.120 23:57:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:46.120 23:57:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:46.120 23:57:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:46.120 rmmod nvme_tcp 00:17:46.120 rmmod nvme_fabrics 00:17:46.377 rmmod nvme_keyring 00:17:46.377 23:57:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:46.377 23:57:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:46.377 23:57:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:46.377 23:57:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 775108 ']' 00:17:46.377 23:57:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 775108 00:17:46.377 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 775108 ']' 00:17:46.377 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 775108 00:17:46.377 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:46.377 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:46.377 23:57:51 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 775108 00:17:46.377 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:46.377 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:46.377 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 775108' 00:17:46.377 killing process with pid 775108 00:17:46.377 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 775108 00:17:46.377 23:57:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 775108 00:17:46.635 23:57:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:46.635 23:57:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:46.635 23:57:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:46.635 23:57:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:46.635 23:57:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:46.635 23:57:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.635 23:57:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.635 23:57:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.536 23:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:48.536 00:17:48.536 real 0m16.062s 00:17:48.536 user 0m22.589s 00:17:48.536 sys 0m3.067s 00:17:48.536 23:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:48.536 23:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:48.536 ************************************ 00:17:48.536 END TEST nvmf_queue_depth 00:17:48.536 
************************************ 00:17:48.536 23:57:54 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:48.536 23:57:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:48.536 23:57:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:48.536 23:57:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:48.794 ************************************ 00:17:48.794 START TEST nvmf_target_multipath 00:17:48.794 ************************************ 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:48.794 * Looking for test storage... 00:17:48.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.794 23:57:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.795 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:48.795 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:48.795 23:57:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:48.795 23:57:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:50.698 
23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.698 23:57:56 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:50.698 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:50.698 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.698 
23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:50.698 Found net devices under 0000:09:00.0: cvl_0_0 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:50.698 Found net devices under 0000:09:00.1: cvl_0_1 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:50.698 23:57:56 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:50.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:50.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:17:50.698 00:17:50.698 --- 10.0.0.2 ping statistics --- 00:17:50.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.698 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:50.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:17:50.698 00:17:50.698 --- 10.0.0.1 ping statistics --- 00:17:50.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.698 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:50.698 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.699 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:50.699 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:50.699 23:57:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:50.699 23:57:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:50.699 only one NIC for nvmf test 00:17:50.699 23:57:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:17:50.699 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:50.699 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:50.699 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:50.699 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:50.699 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:50.699 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:50.957 rmmod nvme_tcp 00:17:50.957 rmmod nvme_fabrics 00:17:50.957 rmmod nvme_keyring 00:17:50.957 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:50.957 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:50.957 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:50.957 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:50.957 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:50.957 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:50.957 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:50.957 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.957 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:50.957 23:57:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.957 23:57:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.957 23:57:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:52.860 00:17:52.860 real 0m4.258s 00:17:52.860 user 0m0.791s 00:17:52.860 sys 0m1.471s 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:52.860 23:57:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:52.860 ************************************ 00:17:52.860 END TEST nvmf_target_multipath 00:17:52.860 ************************************ 00:17:52.860 23:57:58 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:52.860 23:57:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:52.860 23:57:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:52.860 23:57:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:52.861 ************************************ 00:17:52.861 START TEST nvmf_zcopy 00:17:52.861 ************************************ 00:17:52.861 23:57:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:53.119 * Looking for test storage... 
00:17:53.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.119 23:57:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:53.120 23:57:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:55.053 23:58:00 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:55.053 23:58:00 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:55.053 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:55.053 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:55.053 Found net devices under 0000:09:00.0: cvl_0_0 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:55.053 Found net devices under 0000:09:00.1: cvl_0_1 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.053 23:58:00 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:55.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:55.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:17:55.053 00:17:55.053 --- 10.0.0.2 ping statistics --- 00:17:55.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.053 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:55.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:55.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:17:55.053 00:17:55.053 --- 10.0.0.1 ping statistics --- 00:17:55.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.053 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.053 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:55.054 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:55.054 23:58:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:55.054 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.054 23:58:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:55.054 23:58:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:55.054 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=780295 00:17:55.054 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:55.054 23:58:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 780295 00:17:55.054 23:58:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 780295 ']' 00:17:55.054 23:58:00 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.054 23:58:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:55.054 23:58:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.054 23:58:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:55.054 23:58:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:55.312 [2024-07-24 23:58:00.799625] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:55.312 [2024-07-24 23:58:00.799715] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.312 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.312 [2024-07-24 23:58:00.862706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.312 [2024-07-24 23:58:00.952180] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.312 [2024-07-24 23:58:00.952251] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.312 [2024-07-24 23:58:00.952279] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.312 [2024-07-24 23:58:00.952293] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.312 [2024-07-24 23:58:00.952305] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:55.312 [2024-07-24 23:58:00.952346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:55.570 [2024-07-24 23:58:01.099857] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:55.570 [2024-07-24 23:58:01.116057] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:55.570 malloc0 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.570 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:55.571 23:58:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.571 23:58:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:55.571 23:58:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:55.571 23:58:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:55.571 23:58:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:17:55.571 23:58:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:55.571 23:58:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:55.571 { 00:17:55.571 "params": { 00:17:55.571 "name": "Nvme$subsystem", 00:17:55.571 "trtype": "$TEST_TRANSPORT", 00:17:55.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:55.571 "adrfam": "ipv4", 00:17:55.571 "trsvcid": "$NVMF_PORT", 00:17:55.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:55.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:55.571 "hdgst": ${hdgst:-false}, 00:17:55.571 "ddgst": ${ddgst:-false} 00:17:55.571 }, 00:17:55.571 "method": "bdev_nvme_attach_controller" 00:17:55.571 } 00:17:55.571 EOF 00:17:55.571 )") 00:17:55.571 23:58:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:55.571 23:58:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:55.571 23:58:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:55.571 23:58:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:55.571 "params": { 00:17:55.571 "name": "Nvme1", 00:17:55.571 "trtype": "tcp", 00:17:55.571 "traddr": "10.0.0.2", 00:17:55.571 "adrfam": "ipv4", 00:17:55.571 "trsvcid": "4420", 00:17:55.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:55.571 "hdgst": false, 00:17:55.571 "ddgst": false 00:17:55.571 }, 00:17:55.571 "method": "bdev_nvme_attach_controller" 00:17:55.571 }' 00:17:55.571 [2024-07-24 23:58:01.196179] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:17:55.571 [2024-07-24 23:58:01.196249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780437 ] 00:17:55.571 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.571 [2024-07-24 23:58:01.260553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.828 [2024-07-24 23:58:01.357986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.085 Running I/O for 10 seconds... 00:18:06.043 00:18:06.043 Latency(us) 00:18:06.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.043 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:06.043 Verification LBA range: start 0x0 length 0x1000 00:18:06.043 Nvme1n1 : 10.02 5723.95 44.72 0.00 0.00 22299.69 4150.61 32039.82 00:18:06.043 =================================================================================================================== 00:18:06.043 Total : 5723.95 44.72 0.00 0.00 22299.69 4150.61 32039.82 00:18:06.301 23:58:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=781631 00:18:06.301 23:58:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:06.301 23:58:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:06.301 23:58:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:06.301 23:58:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:06.301 23:58:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:06.301 23:58:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:06.301 23:58:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:06.301 23:58:11 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:06.301 { 00:18:06.301 "params": { 00:18:06.301 "name": "Nvme$subsystem", 00:18:06.301 "trtype": "$TEST_TRANSPORT", 00:18:06.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:06.301 "adrfam": "ipv4", 00:18:06.301 "trsvcid": "$NVMF_PORT", 00:18:06.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:06.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:06.301 "hdgst": ${hdgst:-false}, 00:18:06.301 "ddgst": ${ddgst:-false} 00:18:06.301 }, 00:18:06.301 "method": "bdev_nvme_attach_controller" 00:18:06.301 } 00:18:06.301 EOF 00:18:06.301 )") 00:18:06.301 23:58:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:06.301 [2024-07-24 23:58:11.951341] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.301 [2024-07-24 23:58:11.951402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.301 23:58:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:06.301 23:58:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:06.301 23:58:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:06.301 "params": { 00:18:06.301 "name": "Nvme1", 00:18:06.301 "trtype": "tcp", 00:18:06.301 "traddr": "10.0.0.2", 00:18:06.301 "adrfam": "ipv4", 00:18:06.301 "trsvcid": "4420", 00:18:06.301 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.301 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:06.301 "hdgst": false, 00:18:06.301 "ddgst": false 00:18:06.301 }, 00:18:06.301 "method": "bdev_nvme_attach_controller" 00:18:06.301 }' 00:18:06.301 [2024-07-24 23:58:11.959292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.301 [2024-07-24 23:58:11.959316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.301 [2024-07-24 23:58:11.967311] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.301 [2024-07-24 
23:58:11.967333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.301 [2024-07-24 23:58:11.975331] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.301 [2024-07-24 23:58:11.975352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.301 [2024-07-24 23:58:11.983352] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.301 [2024-07-24 23:58:11.983373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.301 [2024-07-24 23:58:11.988964] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:06.301 [2024-07-24 23:58:11.989047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781631 ] 00:18:06.301 [2024-07-24 23:58:11.991374] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.301 [2024-07-24 23:58:11.991412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.301 [2024-07-24 23:58:11.999408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.301 [2024-07-24 23:58:11.999429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.301 [2024-07-24 23:58:12.007429] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.301 [2024-07-24 23:58:12.007449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.301 [2024-07-24 23:58:12.015455] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.301 [2024-07-24 23:58:12.015479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.301 EAL: No free 2048 kB hugepages reported on node 
1 00:18:06.560 [2024-07-24 23:58:12.023495] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.023517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.031507] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.031528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.039525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.039546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.047546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.047566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.048700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.560 [2024-07-24 23:58:12.055608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.055640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.063636] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.063674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.071614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.071635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.079635] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.079656] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.087660] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.087683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.095682] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.095703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.103743] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.103776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.111744] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.111775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.119743] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.119764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.127764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.127785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.135786] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.135807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.142647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.560 [2024-07-24 23:58:12.143808] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:18:06.560 [2024-07-24 23:58:12.143829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.151832] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.151852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.159892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.159927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.167917] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.167952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.175939] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.175978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.183958] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.183995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.191982] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.192023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.200003] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.200041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.208022] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 
23:58:12.208060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.216005] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.216027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.224061] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.224132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.232100] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.232139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.240100] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.240139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.248110] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.248132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.256128] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.256150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.264165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.264191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.560 [2024-07-24 23:58:12.272186] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.560 [2024-07-24 23:58:12.272212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.280189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.280214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.288215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.288255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.296249] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.296274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.304253] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.304277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.312271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.312294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.320292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.320315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.328317] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.328340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.336336] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.336370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 
[2024-07-24 23:58:12.344366] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.344405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.352400] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.352422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.360418] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.360440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.368438] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.368472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.376473] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.376520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.384498] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.384520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.392518] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.392541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.400550] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.400571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.408555] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.408576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.416578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.416599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.424601] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.424623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.432626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.432648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.482386] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.482424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.488854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.488877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 Running I/O for 5 seconds... 
00:18:06.819 [2024-07-24 23:58:12.496873] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.496893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.512001] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.512035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.523373] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.523420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.819 [2024-07-24 23:58:12.534646] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.819 [2024-07-24 23:58:12.534678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.545609] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.545641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.556947] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.556979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.568081] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.568121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.579099] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.579154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.590284] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.590313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.601524] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.601562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.612549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.612581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.625809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.625840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.636129] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.636176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.647033] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.647064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.660342] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.660369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.671162] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.671189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.682447] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.682478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.695470] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.695501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.705422] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.705453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.716383] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.716428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.727314] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.727342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.738535] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.738566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.749576] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.749607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.761174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.761202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.772712] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 
[2024-07-24 23:58:12.772742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.077 [2024-07-24 23:58:12.784173] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.077 [2024-07-24 23:58:12.784201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.334 [2024-07-24 23:58:12.795678] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.334 [2024-07-24 23:58:12.795707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.334 [2024-07-24 23:58:12.807162] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.334 [2024-07-24 23:58:12.807190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.334 [2024-07-24 23:58:12.818190] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.334 [2024-07-24 23:58:12.818218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:12.829846] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:12.829876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:12.841025] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:12.841056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:12.852364] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:12.852392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:12.863837] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:12.863867] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:12.874720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:12.874751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:12.886084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:12.886123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:12.898009] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:12.898039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:12.909284] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:12.909312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:12.920528] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:12.920559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:12.931553] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:12.931584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:12.942404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:12.942434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:12.953205] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:12.953233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:07.335 [2024-07-24 23:58:12.964455] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:12.964487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:12.975580] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:12.975610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:12.988590] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:12.988621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:12.998325] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:12.998353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:13.009813] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:13.009843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:13.023156] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:13.023184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:13.032878] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:13.032909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.335 [2024-07-24 23:58:13.044222] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.335 [2024-07-24 23:58:13.044251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.592 [2024-07-24 23:58:13.055008] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.592 [2024-07-24 23:58:13.055039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.592 [2024-07-24 23:58:13.066081] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.592 [2024-07-24 23:58:13.066121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.592 [2024-07-24 23:58:13.077164] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.592 [2024-07-24 23:58:13.077192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.592 [2024-07-24 23:58:13.088319] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.592 [2024-07-24 23:58:13.088346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.592 [2024-07-24 23:58:13.101371] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.592 [2024-07-24 23:58:13.101417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.592 [2024-07-24 23:58:13.112028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.592 [2024-07-24 23:58:13.112058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.592 [2024-07-24 23:58:13.123477] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.592 [2024-07-24 23:58:13.123507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.592 [2024-07-24 23:58:13.134536] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.592 [2024-07-24 23:58:13.134567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.592 [2024-07-24 23:58:13.145802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:07.592 [2024-07-24 23:58:13.145833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.592
[identical error pair repeats — subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — recurring roughly every 11 ms from [2024-07-24 23:58:13.156981] through [2024-07-24 23:58:15.089233] (elapsed 00:18:07.592 to 00:18:09.399); only the timestamps differ, duplicate entries elided]
[2024-07-24 23:58:15.104623] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.399
[2024-07-24 23:58:15.104653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.657 [2024-07-24 23:58:15.115602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.657 [2024-07-24 23:58:15.115633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.657 [2024-07-24 23:58:15.127131] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.657 [2024-07-24 23:58:15.127176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.657 [2024-07-24 23:58:15.138064] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.657 [2024-07-24 23:58:15.138092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.657 [2024-07-24 23:58:15.148284] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.657 [2024-07-24 23:58:15.148311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.657 [2024-07-24 23:58:15.158699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.657 [2024-07-24 23:58:15.158727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.657 [2024-07-24 23:58:15.171062] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.657 [2024-07-24 23:58:15.171097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.657 [2024-07-24 23:58:15.180693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.657 [2024-07-24 23:58:15.180721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.657 [2024-07-24 23:58:15.191137] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.657 [2024-07-24 23:58:15.191164] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.657 [2024-07-24 23:58:15.204615] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.657 [2024-07-24 23:58:15.204642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.657 [2024-07-24 23:58:15.214713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.657 [2024-07-24 23:58:15.214740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.657 [2024-07-24 23:58:15.225001] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.657 [2024-07-24 23:58:15.225028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.657 [2024-07-24 23:58:15.234955] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.657 [2024-07-24 23:58:15.234982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.657 [2024-07-24 23:58:15.245260] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.657 [2024-07-24 23:58:15.245287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.657 [2024-07-24 23:58:15.255385] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.657 [2024-07-24 23:58:15.255411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.657 [2024-07-24 23:58:15.265569] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.657 [2024-07-24 23:58:15.265596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.657 [2024-07-24 23:58:15.275844] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.658 [2024-07-24 23:58:15.275870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:09.658 [2024-07-24 23:58:15.285916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.658 [2024-07-24 23:58:15.285943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.658 [2024-07-24 23:58:15.296140] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.658 [2024-07-24 23:58:15.296167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.658 [2024-07-24 23:58:15.306696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.658 [2024-07-24 23:58:15.306723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.658 [2024-07-24 23:58:15.319407] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.658 [2024-07-24 23:58:15.319448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.658 [2024-07-24 23:58:15.329140] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.658 [2024-07-24 23:58:15.329167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.658 [2024-07-24 23:58:15.339280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.658 [2024-07-24 23:58:15.339308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.658 [2024-07-24 23:58:15.349702] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.658 [2024-07-24 23:58:15.349729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.658 [2024-07-24 23:58:15.362418] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.658 [2024-07-24 23:58:15.362452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.658 [2024-07-24 23:58:15.371785] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.658 [2024-07-24 23:58:15.371819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.916 [2024-07-24 23:58:15.382909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.916 [2024-07-24 23:58:15.382937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.916 [2024-07-24 23:58:15.393213] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.916 [2024-07-24 23:58:15.393241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.916 [2024-07-24 23:58:15.403611] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.916 [2024-07-24 23:58:15.403638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.916 [2024-07-24 23:58:15.414452] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.916 [2024-07-24 23:58:15.414479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.916 [2024-07-24 23:58:15.425017] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.916 [2024-07-24 23:58:15.425043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.916 [2024-07-24 23:58:15.437071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.916 [2024-07-24 23:58:15.437098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.916 [2024-07-24 23:58:15.446078] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.916 [2024-07-24 23:58:15.446112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.916 [2024-07-24 23:58:15.458829] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:09.916 [2024-07-24 23:58:15.458857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.916 [2024-07-24 23:58:15.469135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.916 [2024-07-24 23:58:15.469162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.916 [2024-07-24 23:58:15.479402] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.916 [2024-07-24 23:58:15.479443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.917 [2024-07-24 23:58:15.491957] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.917 [2024-07-24 23:58:15.491984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.917 [2024-07-24 23:58:15.501807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.917 [2024-07-24 23:58:15.501834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.917 [2024-07-24 23:58:15.512168] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.917 [2024-07-24 23:58:15.512195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.917 [2024-07-24 23:58:15.522507] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.917 [2024-07-24 23:58:15.522534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.917 [2024-07-24 23:58:15.533201] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.917 [2024-07-24 23:58:15.533228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.917 [2024-07-24 23:58:15.543071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.917 
[2024-07-24 23:58:15.543097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.917 [2024-07-24 23:58:15.553216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.917 [2024-07-24 23:58:15.553242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.917 [2024-07-24 23:58:15.563774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.917 [2024-07-24 23:58:15.563801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.917 [2024-07-24 23:58:15.573786] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.917 [2024-07-24 23:58:15.573825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.917 [2024-07-24 23:58:15.584279] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.917 [2024-07-24 23:58:15.584305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.917 [2024-07-24 23:58:15.596771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.917 [2024-07-24 23:58:15.596798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.917 [2024-07-24 23:58:15.608873] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.917 [2024-07-24 23:58:15.608900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.917 [2024-07-24 23:58:15.618163] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.917 [2024-07-24 23:58:15.618190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.917 [2024-07-24 23:58:15.629253] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.917 [2024-07-24 23:58:15.629280] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.639813] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.639840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.650111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.650138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.660822] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.660849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.671577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.671605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.682248] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.682275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.692713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.692740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.705295] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.705323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.715426] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.715454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:10.175 [2024-07-24 23:58:15.726546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.726574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.738992] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.739020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.749417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.749445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.759949] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.759976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.770769] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.770797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.780842] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.780878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.791169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.791197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.801981] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.802008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.812719] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.812747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.825045] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.825073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.834880] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.834907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.846431] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.846462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.857671] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.857702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.175 [2024-07-24 23:58:15.868719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.175 [2024-07-24 23:58:15.868749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.176 [2024-07-24 23:58:15.881967] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.176 [2024-07-24 23:58:15.881997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:15.893123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:15.893167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:15.903574] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:15.903602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:15.914299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:15.914326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:15.927498] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:15.927529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:15.938424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:15.938469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:15.949714] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:15.949744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:15.960912] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:15.960942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:15.972056] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:15.972085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:15.983051] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:15.983081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:15.994605] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 
[2024-07-24 23:58:15.994643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:16.006305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:16.006333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:16.017122] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:16.017165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:16.028551] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:16.028581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:16.039892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:16.039921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:16.050769] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:16.050799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:16.064086] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:16.064125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:16.075181] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:16.075209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:16.086060] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:16.086092] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:16.097364] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:16.097391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:16.108975] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:16.109005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:16.120178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:16.120205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:16.133154] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:16.133182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.434 [2024-07-24 23:58:16.143413] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.434 [2024-07-24 23:58:16.143443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.693 [2024-07-24 23:58:16.155598] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.693 [2024-07-24 23:58:16.155628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.693 [2024-07-24 23:58:16.167224] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.693 [2024-07-24 23:58:16.167251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.693 [2024-07-24 23:58:16.179006] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.693 [2024-07-24 23:58:16.179036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:10.693 [2024-07-24 23:58:16.190077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.693 [2024-07-24 23:58:16.190116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.693 [2024-07-24 23:58:16.201429] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.693 [2024-07-24 23:58:16.201460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.693 [2024-07-24 23:58:16.212651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.693 [2024-07-24 23:58:16.212681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.693 [2024-07-24 23:58:16.223731] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.693 [2024-07-24 23:58:16.223760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.693 [2024-07-24 23:58:16.236633] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.693 [2024-07-24 23:58:16.236663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.693 [2024-07-24 23:58:16.247659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.693 [2024-07-24 23:58:16.247689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.693 [2024-07-24 23:58:16.259019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.693 [2024-07-24 23:58:16.259048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.693 [2024-07-24 23:58:16.270276] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.693 [2024-07-24 23:58:16.270303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.693 [2024-07-24 23:58:16.281811] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.693 [2024-07-24 23:58:16.281840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.693 
00:18:11.985 Latency(us) 00:18:11.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.985 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 
00:18:11.985 Nvme1n1 : 5.01 11591.06 90.56 0.00 0.00 11029.68 4514.70 21942.42 00:18:11.985 =================================================================================================================== 00:18:11.985 Total : 11591.06 90.56 0.00 0.00 11029.68 4514.70 21942.42 00:18:11.985 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (781631) - No such process 00:18:12.244 23:58:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 781631 00:18:12.244 23:58:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:12.244 23:58:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.244 23:58:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:12.244 23:58:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.244 
23:58:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:12.244 23:58:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.244 23:58:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:12.244 delay0 00:18:12.244 23:58:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.244 23:58:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:12.244 23:58:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.244 23:58:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:12.244 23:58:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.244 23:58:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:12.244 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.244 [2024-07-24 23:58:17.868259] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:20.350 Initializing NVMe Controllers 00:18:20.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:20.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:20.350 Initialization complete. Launching workers. 
00:18:20.350 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 223, failed: 18236 00:18:20.350 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18303, failed to submit 156 00:18:20.350 success 18243, unsuccess 60, failed 0 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:20.350 rmmod nvme_tcp 00:18:20.350 rmmod nvme_fabrics 00:18:20.350 rmmod nvme_keyring 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 780295 ']' 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 780295 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 780295 ']' 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 780295 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:20.350 23:58:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 780295 00:18:20.350 23:58:25 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:20.350 23:58:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:20.350 23:58:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 780295' 00:18:20.350 killing process with pid 780295 00:18:20.350 23:58:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 780295 00:18:20.350 23:58:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 780295 00:18:20.350 23:58:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:20.350 23:58:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:20.350 23:58:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:20.350 23:58:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:20.350 23:58:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:20.350 23:58:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.350 23:58:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.350 23:58:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.725 23:58:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:21.725 00:18:21.725 real 0m28.727s 00:18:21.725 user 0m39.328s 00:18:21.725 sys 0m9.419s 00:18:21.725 23:58:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:21.725 23:58:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:21.725 ************************************ 00:18:21.725 END TEST nvmf_zcopy 00:18:21.725 ************************************ 00:18:21.725 23:58:27 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:21.725 
23:58:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:21.725 23:58:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:21.725 23:58:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:21.725 ************************************ 00:18:21.725 START TEST nvmf_nmic 00:18:21.725 ************************************ 00:18:21.725 23:58:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:21.725 * Looking for test storage... 00:18:21.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:21.725 23:58:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:21.725 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:21.725 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:21.725 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:21.725 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:21.725 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:21.725 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:21.725 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:21.725 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:21.725 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:21.725 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:21.725 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.725 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:21.725 23:58:27 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:21.725 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.725 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:21.726 
23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:21.726 23:58:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.303 23:58:29 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:24.303 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:24.303 Found 0000:09:00.1 (0x8086 - 0x159b) 
00:18:24.303 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:24.304 Found net devices under 0000:09:00.0: cvl_0_0 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:24.304 23:58:29 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:24.304 Found net devices under 0000:09:00.1: cvl_0_1 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:24.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:18:24.304 00:18:24.304 --- 10.0.0.2 ping statistics --- 00:18:24.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.304 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:24.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:24.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:18:24.304 00:18:24.304 --- 10.0.0.1 ping statistics --- 00:18:24.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.304 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=785133 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 785133 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 785133 ']' 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.304 [2024-07-24 23:58:29.683315] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:24.304 [2024-07-24 23:58:29.683405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.304 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.304 [2024-07-24 23:58:29.747935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:24.304 [2024-07-24 23:58:29.837972] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.304 [2024-07-24 23:58:29.838026] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.304 [2024-07-24 23:58:29.838040] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.304 [2024-07-24 23:58:29.838051] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.304 [2024-07-24 23:58:29.838060] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:24.304 [2024-07-24 23:58:29.838143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.304 [2024-07-24 23:58:29.838206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.304 [2024-07-24 23:58:29.838272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:24.304 [2024-07-24 23:58:29.838275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.304 23:58:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.304 [2024-07-24 23:58:29.996926] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.304 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.304 23:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:24.304 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.304 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.562 Malloc0 00:18:24.562 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.562 23:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:24.562 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.562 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.562 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.562 23:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:24.562 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.562 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.562 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.562 23:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.562 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.562 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.562 [2024-07-24 23:58:30.053715] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.562 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.562 23:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:24.562 test case1: single bdev can't be used in multiple subsystems 00:18:24.562 23:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:24.562 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.562 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.563 [2024-07-24 23:58:30.077516] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:24.563 [2024-07-24 23:58:30.077554] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:24.563 [2024-07-24 23:58:30.077590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.563 request: 00:18:24.563 { 00:18:24.563 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:24.563 "namespace": { 00:18:24.563 "bdev_name": "Malloc0", 00:18:24.563 "no_auto_visible": false 00:18:24.563 }, 00:18:24.563 "method": "nvmf_subsystem_add_ns", 00:18:24.563 "req_id": 1 00:18:24.563 } 00:18:24.563 Got JSON-RPC error response 00:18:24.563 response: 00:18:24.563 { 00:18:24.563 "code": -32602, 00:18:24.563 "message": "Invalid parameters" 00:18:24.563 } 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:18:24.563 Adding namespace failed - expected result. 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:24.563 test case2: host connect to nvmf target in multiple paths 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.563 [2024-07-24 23:58:30.085660] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.563 23:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:25.128 23:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:25.693 23:58:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:25.693 23:58:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:25.693 23:58:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:25.693 23:58:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:25.693 23:58:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:27.587 23:58:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:27.587 23:58:33 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:27.587 23:58:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:27.587 23:58:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:27.587 23:58:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:27.587 23:58:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:27.587 23:58:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:27.844 [global] 00:18:27.844 thread=1 00:18:27.844 invalidate=1 00:18:27.844 rw=write 00:18:27.844 time_based=1 00:18:27.844 runtime=1 00:18:27.844 ioengine=libaio 00:18:27.844 direct=1 00:18:27.844 bs=4096 00:18:27.844 iodepth=1 00:18:27.844 norandommap=0 00:18:27.844 numjobs=1 00:18:27.844 00:18:27.844 verify_dump=1 00:18:27.844 verify_backlog=512 00:18:27.844 verify_state_save=0 00:18:27.844 do_verify=1 00:18:27.844 verify=crc32c-intel 00:18:27.844 [job0] 00:18:27.844 filename=/dev/nvme0n1 00:18:27.844 Could not set queue depth (nvme0n1) 00:18:27.844 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:27.844 fio-3.35 00:18:27.844 Starting 1 thread 00:18:29.212 00:18:29.212 job0: (groupid=0, jobs=1): err= 0: pid=785646: Wed Jul 24 23:58:34 2024 00:18:29.212 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:29.212 slat (nsec): min=5547, max=69868, avg=20284.05, stdev=10262.47 00:18:29.212 clat (usec): min=264, max=1337, avg=329.56, stdev=51.55 00:18:29.212 lat (usec): min=271, max=1345, avg=349.84, stdev=55.58 00:18:29.212 clat percentiles (usec): 00:18:29.212 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 297], 00:18:29.212 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 334], 00:18:29.212 | 70.00th=[ 
347], 80.00th=[ 359], 90.00th=[ 375], 95.00th=[ 383], 00:18:29.212 | 99.00th=[ 420], 99.50th=[ 437], 99.90th=[ 1156], 99.95th=[ 1336], 00:18:29.212 | 99.99th=[ 1336] 00:18:29.212 write: IOPS=1842, BW=7369KiB/s (7545kB/s)(7376KiB/1001msec); 0 zone resets 00:18:29.212 slat (usec): min=5, max=28777, avg=31.26, stdev=669.85 00:18:29.212 clat (usec): min=168, max=1094, avg=210.30, stdev=41.79 00:18:29.212 lat (usec): min=175, max=29036, avg=241.57, stdev=672.45 00:18:29.212 clat percentiles (usec): 00:18:29.212 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 186], 00:18:29.212 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:18:29.212 | 70.00th=[ 219], 80.00th=[ 235], 90.00th=[ 249], 95.00th=[ 273], 00:18:29.212 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 1074], 99.95th=[ 1090], 00:18:29.212 | 99.99th=[ 1090] 00:18:29.212 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:18:29.212 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:29.212 lat (usec) : 250=49.35%, 500=50.44%, 750=0.06% 00:18:29.212 lat (msec) : 2=0.15% 00:18:29.212 cpu : usr=2.80%, sys=6.80%, ctx=3383, majf=0, minf=2 00:18:29.212 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:29.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.212 issued rwts: total=1536,1844,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.212 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:29.212 00:18:29.212 Run status group 0 (all jobs): 00:18:29.212 READ: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:18:29.212 WRITE: bw=7369KiB/s (7545kB/s), 7369KiB/s-7369KiB/s (7545kB/s-7545kB/s), io=7376KiB (7553kB), run=1001-1001msec 00:18:29.212 00:18:29.212 Disk stats (read/write): 00:18:29.212 nvme0n1: ios=1505/1536, merge=0/0, 
ticks=1472/300, in_queue=1772, util=98.60% 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:29.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:29.213 rmmod nvme_tcp 00:18:29.213 rmmod nvme_fabrics 00:18:29.213 rmmod nvme_keyring 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@489 -- # '[' -n 785133 ']' 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 785133 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 785133 ']' 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 785133 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 785133 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 785133' 00:18:29.213 killing process with pid 785133 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 785133 00:18:29.213 23:58:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 785133 00:18:29.470 23:58:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:29.470 23:58:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:29.470 23:58:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:29.470 23:58:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:29.470 23:58:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:29.470 23:58:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.470 23:58:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.470 23:58:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.001 23:58:37 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:32.001 00:18:32.001 real 0m9.883s 00:18:32.001 user 0m22.056s 00:18:32.001 sys 0m2.491s 00:18:32.001 23:58:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:32.001 23:58:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:32.001 ************************************ 00:18:32.001 END TEST nvmf_nmic 00:18:32.001 ************************************ 00:18:32.001 23:58:37 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:32.001 23:58:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:32.001 23:58:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:32.001 23:58:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:32.001 ************************************ 00:18:32.001 START TEST nvmf_fio_target 00:18:32.001 ************************************ 00:18:32.001 23:58:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:32.001 * Looking for test storage... 
00:18:32.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:32.001 23:58:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.001 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:32.001 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.001 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.001 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:32.002 23:58:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:33.900 
23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:33.900 
23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:33.900 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:33.900 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.900 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:33.901 Found net devices under 0000:09:00.0: cvl_0_0 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:33.901 Found net devices under 0000:09:00.1: cvl_0_1 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:33.901 23:58:39 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:33.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:33.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:18:33.901 00:18:33.901 --- 10.0.0.2 ping statistics --- 00:18:33.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.901 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:33.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:33.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:18:33.901 00:18:33.901 --- 10.0.0.1 ping statistics --- 00:18:33.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.901 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=787714 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 787714 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- 
# '[' -z 787714 ']' 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:33.901 23:58:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.901 [2024-07-24 23:58:39.358722] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:33.901 [2024-07-24 23:58:39.358811] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.901 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.901 [2024-07-24 23:58:39.426955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:33.901 [2024-07-24 23:58:39.521434] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.901 [2024-07-24 23:58:39.521493] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.901 [2024-07-24 23:58:39.521509] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.901 [2024-07-24 23:58:39.521522] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.901 [2024-07-24 23:58:39.521534] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:33.901 [2024-07-24 23:58:39.521616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:33.901 [2024-07-24 23:58:39.521672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:18:33.901 [2024-07-24 23:58:39.521723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:18:33.901 [2024-07-24 23:58:39.521726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:34.158 23:58:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:18:34.158 23:58:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0
00:18:34.158 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:18:34.158 23:58:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:34.158 23:58:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.158 23:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:34.158 23:58:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:18:34.416 [2024-07-24 23:58:39.951044] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:34.416 23:58:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:34.674 23:58:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:18:34.674 23:58:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:34.932 23:58:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:18:34.932 23:58:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:35.189 23:58:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:18:35.189 23:58:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:35.447 23:58:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:18:35.447 23:58:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:18:35.704 23:58:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:35.962 23:58:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:18:35.962 23:58:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:36.219 23:58:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:18:36.219 23:58:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:36.476 23:58:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:18:36.476 23:58:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:18:36.734 23:58:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:18:36.991 23:58:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:18:36.991 23:58:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:18:37.248 23:58:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:18:37.248 23:58:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:18:37.505 23:58:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:37.763 [2024-07-24 23:58:43.283035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:37.763 23:58:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:18:38.020 23:58:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:18:38.278 23:58:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:18:38.843 23:58:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:18:38.843 23:58:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0
00:18:38.843 23:58:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:18:38.843 23:58:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]]
00:18:38.843 23:58:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4
00:18:38.843 23:58:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2
00:18:41.368 23:58:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:18:41.368 23:58:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:18:41.368 23:58:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME
00:18:41.368 23:58:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4
00:18:41.368 23:58:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:18:41.368 23:58:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0
00:18:41.368 23:58:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:18:41.368 [global]
00:18:41.368 thread=1
00:18:41.368 invalidate=1
00:18:41.368 rw=write
00:18:41.368 time_based=1
00:18:41.368 runtime=1
00:18:41.368 ioengine=libaio
00:18:41.368 direct=1
00:18:41.368 bs=4096
00:18:41.369 iodepth=1
00:18:41.369 norandommap=0
00:18:41.369 numjobs=1
00:18:41.369
00:18:41.369 verify_dump=1
00:18:41.369 verify_backlog=512
00:18:41.369 verify_state_save=0
00:18:41.369 do_verify=1
00:18:41.369 verify=crc32c-intel
00:18:41.369 [job0]
00:18:41.369 filename=/dev/nvme0n1
00:18:41.369 [job1]
00:18:41.369 filename=/dev/nvme0n2
00:18:41.369 [job2]
00:18:41.369 filename=/dev/nvme0n3
00:18:41.369 [job3]
00:18:41.369 filename=/dev/nvme0n4
00:18:41.369 Could not set queue depth (nvme0n1)
00:18:41.369 Could not set queue depth (nvme0n2)
00:18:41.369 Could not set queue depth (nvme0n3)
00:18:41.369 Could not set queue depth (nvme0n4)
00:18:41.369 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:41.369 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:41.369 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:41.369 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:41.369 fio-3.35
00:18:41.369 Starting 4 threads
00:18:42.302
00:18:42.302 job0: (groupid=0, jobs=1): err= 0: pid=788778: Wed Jul 24 23:58:47 2024
00:18:42.302 read: IOPS=914, BW=3657KiB/s (3745kB/s)(3668KiB/1003msec)
00:18:42.302 slat (nsec): min=5413, max=53765, avg=22667.92, stdev=10666.10
00:18:42.302 clat (usec): min=284, max=42020, avg=687.37, stdev=3557.09
00:18:42.302 lat (usec): min=290, max=42033, avg=710.04, stdev=3557.06
00:18:42.302 clat percentiles (usec):
00:18:42.302 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 318],
00:18:42.302 | 30.00th=[ 334], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 371],
00:18:42.302 | 70.00th=[ 383], 80.00th=[ 441], 90.00th=[ 478], 95.00th=[ 498],
00:18:42.302 | 99.00th=[ 611], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206],
00:18:42.302 | 99.99th=[42206]
00:18:42.302 write: IOPS=1020, BW=4084KiB/s (4182kB/s)(4096KiB/1003msec); 0 zone resets
00:18:42.302 slat (nsec): min=6556, max=72648, avg=22908.96, stdev=11194.74
00:18:42.302 clat (usec): min=237, max=542, avg=307.62, stdev=51.35
00:18:42.302 lat (usec): min=245, max=551, avg=330.52, stdev=51.19
00:18:42.302 clat percentiles (usec):
00:18:42.302 | 1.00th=[ 245], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265],
00:18:42.302 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 314],
00:18:42.302 | 70.00th=[ 326], 80.00th=[ 347], 90.00th=[ 388], 95.00th=[ 408],
00:18:42.302 | 99.00th=[ 453], 99.50th=[ 461], 99.90th=[ 510], 99.95th=[ 545],
00:18:42.302 | 99.99th=[ 545]
00:18:42.302 bw ( KiB/s): min= 2520, max= 5672, per=29.43%, avg=4096.00, stdev=2228.80, samples=2
00:18:42.302 iops : min= 630, max= 1418, avg=1024.00, stdev=557.20, samples=2
00:18:42.302 lat (usec) : 250=1.75%, 500=96.08%, 750=1.80%
00:18:42.302 lat (msec) : 50=0.36%
00:18:42.302 cpu : usr=2.20%, sys=4.79%, ctx=1943, majf=0, minf=1
00:18:42.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:42.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:42.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:42.302 issued rwts: total=917,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:42.302 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:42.302 job1: (groupid=0, jobs=1): err= 0: pid=788779: Wed Jul 24 23:58:47 2024
00:18:42.302 read: IOPS=505, BW=2024KiB/s (2072kB/s)(2064KiB/1020msec)
00:18:42.302 slat (nsec): min=5019, max=68424, avg=18485.39, stdev=8160.08
00:18:42.302 clat (usec): min=265, max=41180, avg=1380.80, stdev=6362.15
00:18:42.302 lat (usec): min=285, max=41204, avg=1399.28, stdev=6363.04
00:18:42.302 clat percentiles (usec):
00:18:42.302 | 1.00th=[ 289], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318],
00:18:42.302 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 351],
00:18:42.302 | 70.00th=[ 363], 80.00th=[ 416], 90.00th=[ 461], 95.00th=[ 502],
00:18:42.302 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:18:42.302 | 99.99th=[41157]
00:18:42.302 write: IOPS=1003, BW=4016KiB/s (4112kB/s)(4096KiB/1020msec); 0 zone resets
00:18:42.302 slat (nsec): min=6935, max=73691, avg=15817.95, stdev=9273.08
00:18:42.302 clat (usec): min=177, max=1275, avg=267.51, stdev=100.04
00:18:42.302 lat (usec): min=189, max=1316, avg=283.33, stdev=104.31
00:18:42.302 clat percentiles (usec):
00:18:42.302 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194],
00:18:42.302 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 225], 60.00th=[ 247],
00:18:42.302 | 70.00th=[ 293], 80.00th=[ 347], 90.00th=[ 416], 95.00th=[ 453],
00:18:42.302 | 99.00th=[ 570], 99.50th=[ 611], 99.90th=[ 660], 99.95th=[ 1270],
00:18:42.302 | 99.99th=[ 1270]
00:18:42.302 bw ( KiB/s): min= 4096, max= 4096, per=29.43%, avg=4096.00, stdev= 0.00, samples=2
00:18:42.302 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2
00:18:42.302 lat (usec) : 250=40.97%, 500=55.19%, 750=2.92%
00:18:42.302 lat (msec) : 2=0.06%, 50=0.84%
00:18:42.302 cpu : usr=1.47%, sys=3.04%, ctx=1543, majf=0, minf=1
00:18:42.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:42.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:42.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:42.302 issued rwts: total=516,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:42.302 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:42.302 job2: (groupid=0, jobs=1): err= 0: pid=788786: Wed Jul 24 23:58:47 2024
00:18:42.302 read: IOPS=20, BW=81.9KiB/s (83.8kB/s)(84.0KiB/1026msec)
00:18:42.302 slat (nsec): min=11226, max=44215, avg=28734.90, stdev=9911.89
00:18:42.302 clat (usec): min=40705, max=41058, avg=40950.71, stdev=65.86
00:18:42.302 lat (usec): min=40716, max=41075, avg=40979.45, stdev=67.40
00:18:42.302 clat percentiles (usec):
00:18:42.302 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:18:42.302 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:18:42.302 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:18:42.302 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:18:42.302 | 99.99th=[41157]
00:18:42.302 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets
00:18:42.302 slat (nsec): min=6998, max=70743, avg=19945.49, stdev=10456.91
00:18:42.302 clat (usec): min=201, max=556, avg=297.29, stdev=57.43
00:18:42.302 lat (usec): min=209, max=596, avg=317.24, stdev=59.26
00:18:42.302 clat percentiles (usec):
00:18:42.302 | 1.00th=[ 219], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 249],
00:18:42.302 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 297],
00:18:42.302 | 70.00th=[ 318], 80.00th=[ 338], 90.00th=[ 383], 95.00th=[ 412],
00:18:42.302 | 99.00th=[ 469], 99.50th=[ 486], 99.90th=[ 553], 99.95th=[ 553],
00:18:42.302 | 99.99th=[ 553]
00:18:42.302 bw ( KiB/s): min= 4096, max= 4096, per=29.43%, avg=4096.00, stdev= 0.00, samples=1
00:18:42.302 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:18:42.302 lat (usec) : 250=19.89%, 500=75.98%, 750=0.19%
00:18:42.302 lat (msec) : 50=3.94%
00:18:42.302 cpu : usr=0.68%, sys=0.98%, ctx=534, majf=0, minf=2
00:18:42.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:42.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:42.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:42.302 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:42.302 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:42.302 job3: (groupid=0, jobs=1): err= 0: pid=788787: Wed Jul 24 23:58:47 2024
00:18:42.302 read: IOPS=593, BW=2373KiB/s (2430kB/s)(2444KiB/1030msec)
00:18:42.302 slat (nsec): min=6608, max=68684, avg=20405.93, stdev=8809.69
00:18:42.302 clat (usec): min=311, max=41566, avg=1181.43, stdev=5659.02
00:18:42.302 lat (usec): min=327, max=41583, avg=1201.84, stdev=5659.10
00:18:42.302 clat percentiles (usec):
00:18:42.302 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343],
00:18:42.302 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 371],
00:18:42.302 | 70.00th=[ 400], 80.00th=[ 420], 90.00th=[ 457], 95.00th=[ 529],
00:18:42.302 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681],
00:18:42.302 | 99.99th=[41681]
00:18:42.302 write: IOPS=994, BW=3977KiB/s (4072kB/s)(4096KiB/1030msec); 0 zone resets
00:18:42.302 slat (nsec): min=7673, max=57842, avg=20469.62, stdev=8693.26
00:18:42.302 clat (usec): min=196, max=705, avg=258.71, stdev=49.25
00:18:42.302 lat (usec): min=205, max=721, avg=279.18, stdev=53.94
00:18:42.302 clat percentiles (usec):
00:18:42.302 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 229],
00:18:42.302 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253],
00:18:42.302 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 355],
00:18:42.302 | 99.00th=[ 429], 99.50th=[ 441], 99.90th=[ 701], 99.95th=[ 709],
00:18:42.302 | 99.99th=[ 709]
00:18:42.302 bw ( KiB/s): min= 4096, max= 4096, per=29.43%, avg=4096.00, stdev= 0.00, samples=2
00:18:42.302 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2
00:18:42.302 lat (usec) : 250=35.29%, 500=61.90%, 750=2.02%, 1000=0.06%
00:18:42.302 lat (msec) : 50=0.73%
00:18:42.302 cpu : usr=2.24%, sys=4.08%, ctx=1637, majf=0, minf=1
00:18:42.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:42.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:42.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:42.302 issued rwts: total=611,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:42.302 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:42.302
00:18:42.302 Run status group 0 (all jobs):
00:18:42.302 READ: bw=8019KiB/s (8212kB/s), 81.9KiB/s-3657KiB/s (83.8kB/s-3745kB/s), io=8260KiB (8458kB), run=1003-1030msec
00:18:42.303 WRITE: bw=13.6MiB/s (14.3MB/s), 1996KiB/s-4084KiB/s (2044kB/s-4182kB/s), io=14.0MiB (14.7MB), run=1003-1030msec
00:18:42.303
00:18:42.303 Disk stats (read/write):
00:18:42.303 nvme0n1: ios=936/1024, merge=0/0, ticks=1303/300, in_queue=1603, util=85.37%
00:18:42.303 nvme0n2: ios=561/988, merge=0/0, ticks=851/258, in_queue=1109, util=89.22%
00:18:42.303 nvme0n3: ios=68/512, merge=0/0, ticks=796/147, in_queue=943, util=94.87%
00:18:42.303 nvme0n4: ios=659/1024, merge=0/0, ticks=621/249, in_queue=870, util=95.67%
00:18:42.303 23:58:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:18:42.303 [global]
00:18:42.303 thread=1
00:18:42.303 invalidate=1
00:18:42.303 rw=randwrite
00:18:42.303 time_based=1
00:18:42.303 runtime=1
00:18:42.303 ioengine=libaio
00:18:42.303 direct=1
00:18:42.303 bs=4096
00:18:42.303 iodepth=1
00:18:42.303 norandommap=0
00:18:42.303 numjobs=1
00:18:42.303
00:18:42.303 verify_dump=1
00:18:42.303 verify_backlog=512
00:18:42.303 verify_state_save=0
00:18:42.303 do_verify=1
00:18:42.303 verify=crc32c-intel
00:18:42.303 [job0]
00:18:42.303 filename=/dev/nvme0n1
00:18:42.303 [job1]
00:18:42.303 filename=/dev/nvme0n2
00:18:42.303 [job2]
00:18:42.303 filename=/dev/nvme0n3
00:18:42.303 [job3]
00:18:42.303 filename=/dev/nvme0n4
00:18:42.303 Could not set queue depth (nvme0n1)
00:18:42.303 Could not set queue depth (nvme0n2)
00:18:42.303 Could not set queue depth (nvme0n3)
00:18:42.303 Could not set queue depth (nvme0n4)
00:18:42.560 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:42.560 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:42.560 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:42.560 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:42.560 fio-3.35
00:18:42.560 Starting 4 threads
00:18:43.975
00:18:43.975 job0: (groupid=0, jobs=1): err= 0: pid=789011: Wed Jul 24 23:58:49 2024
00:18:43.975 read: IOPS=513, BW=2053KiB/s (2102kB/s)(2100KiB/1023msec)
00:18:43.975 slat (nsec): min=5415, max=34979, avg=6843.27, stdev=4191.40
00:18:43.975 clat (usec): min=381, max=41044, avg=1463.17, stdev=6299.29
00:18:43.975 lat (usec): min=388, max=41076, avg=1470.01, stdev=6302.84
00:18:43.975 clat percentiles (usec):
00:18:43.975 | 1.00th=[ 420], 5.00th=[ 433], 10.00th=[ 437], 20.00th=[ 445],
00:18:43.975 | 30.00th=[ 449], 40.00th=[ 453], 50.00th=[ 457], 60.00th=[ 461],
00:18:43.975 | 70.00th=[ 469], 80.00th=[ 478], 90.00th=[ 494], 95.00th=[ 515],
00:18:43.975 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:18:43.975 | 99.99th=[41157]
00:18:43.975 write: IOPS=1000, BW=4004KiB/s (4100kB/s)(4096KiB/1023msec); 0 zone resets
00:18:43.975 slat (nsec): min=6314, max=39848, avg=8380.54, stdev=2815.56
00:18:43.975 clat (usec): min=171, max=2682, avg=232.66, stdev=82.93
00:18:43.975 lat (usec): min=179, max=2692, avg=241.05, stdev=83.03
00:18:43.975 clat percentiles (usec):
00:18:43.975 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 202],
00:18:43.975 | 30.00th=[ 215], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 241],
00:18:43.975 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 281],
00:18:43.975 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 498], 99.95th=[ 2671],
00:18:43.975 | 99.99th=[ 2671]
00:18:43.975 bw ( KiB/s): min= 8192, max= 8192, per=82.00%, avg=8192.00, stdev= 0.00, samples=1
00:18:43.975 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:18:43.975 lat (usec) : 250=55.78%, 500=41.38%, 750=1.94%
00:18:43.975 lat (msec) : 4=0.06%, 50=0.84%
00:18:43.975 cpu : usr=0.39%, sys=2.05%, ctx=1550, majf=0, minf=1
00:18:43.975 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:43.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.975 issued rwts: total=525,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:43.975 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:43.975 job1: (groupid=0, jobs=1): err= 0: pid=789012: Wed Jul 24 23:58:49 2024
00:18:43.975 read: IOPS=21, BW=85.9KiB/s (87.9kB/s)(88.0KiB/1025msec)
00:18:43.975 slat (nsec): min=8419, max=37552, avg=22779.09, stdev=11238.13
00:18:43.975 clat (usec): min=40733, max=44061, avg=41100.60, stdev=664.30
00:18:43.975 lat (usec): min=40741, max=44094, avg=41123.38, stdev=666.65
00:18:43.975 clat percentiles (usec):
00:18:43.975 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:18:43.975 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:18:43.975 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:18:43.975 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303],
00:18:43.975 | 99.99th=[44303]
00:18:43.975 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets
00:18:43.975 slat (nsec): min=7698, max=27616, avg=9141.59, stdev=2220.89
00:18:43.975 clat (usec): min=191, max=297, avg=221.45, stdev=16.95
00:18:43.975 lat (usec): min=200, max=306, avg=230.59, stdev=17.36
00:18:43.975 clat percentiles (usec):
00:18:43.975 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 208],
00:18:43.975 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223],
00:18:43.975 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 253],
00:18:43.975 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 297], 99.95th=[ 297],
00:18:43.975 | 99.99th=[ 297]
00:18:43.975 bw ( KiB/s): min= 4096, max= 4096, per=41.00%, avg=4096.00, stdev= 0.00, samples=1
00:18:43.975 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:18:43.975 lat (usec) : 250=90.64%, 500=5.24%
00:18:43.975 lat (msec) : 50=4.12%
00:18:43.975 cpu : usr=0.29%, sys=0.68%, ctx=535, majf=0, minf=2
00:18:43.975 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:43.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.975 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:43.975 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:43.975 job2: (groupid=0, jobs=1): err= 0: pid=789013: Wed Jul 24 23:58:49 2024
00:18:43.975 read: IOPS=35, BW=142KiB/s (146kB/s)(144KiB/1013msec)
00:18:43.975 slat (nsec): min=5720, max=35306, avg=18230.89, stdev=9458.83
00:18:43.975 clat (usec): min=289, max=42217, avg=24445.74, stdev=20643.75
00:18:43.975 lat (usec): min=296, max=42232, avg=24463.97, stdev=20648.29
00:18:43.975 clat percentiles (usec):
00:18:43.975 | 1.00th=[ 289], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 338],
00:18:43.975 | 30.00th=[ 383], 40.00th=[ 611], 50.00th=[41157], 60.00th=[41157],
00:18:43.975 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:18:43.975 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:18:43.975 | 99.99th=[42206]
00:18:43.975 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets
00:18:43.975 slat (nsec): min=6514, max=37403, avg=8776.70, stdev=3344.34
00:18:43.975 clat (usec): min=187, max=421, avg=245.30, stdev=38.85
00:18:43.975 lat (usec): min=195, max=429, avg=254.08, stdev=39.45
00:18:43.975 clat percentiles (usec):
00:18:43.975 | 1.00th=[ 200], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 223],
00:18:43.975 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 241],
00:18:43.975 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 273], 95.00th=[ 375],
00:18:43.975 | 99.00th=[ 396], 99.50th=[ 400], 99.90th=[ 420], 99.95th=[ 420],
00:18:43.975 | 99.99th=[ 420]
00:18:43.975 bw ( KiB/s): min= 4096, max= 4096, per=41.00%, avg=4096.00, stdev= 0.00, samples=1
00:18:43.975 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:18:43.975 lat (usec) : 250=73.54%, 500=22.45%, 750=0.18%
00:18:43.975 lat (msec) : 50=3.83%
00:18:43.975 cpu : usr=0.20%, sys=0.49%, ctx=549, majf=0, minf=1
00:18:43.975 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:43.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.975 issued rwts: total=36,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:43.975 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:43.975 job3: (groupid=0, jobs=1): err= 0: pid=789014: Wed Jul 24 23:58:49 2024
00:18:43.975 read: IOPS=22, BW=89.8KiB/s (91.9kB/s)(92.0KiB/1025msec)
00:18:43.975 slat (nsec): min=6556, max=34561, avg=21236.57, stdev=10639.05
00:18:43.975 clat (usec): min=459, max=42039, avg=38696.84, stdev=10641.06
00:18:43.975 lat (usec): min=473, max=42073, avg=38718.07, stdev=10643.27
00:18:43.975 clat percentiles (usec):
00:18:43.975 | 1.00th=[ 461], 5.00th=[10159], 10.00th=[41157], 20.00th=[41681],
00:18:43.975 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:18:43.975 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:18:43.975 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:18:43.975 | 99.99th=[42206]
00:18:43.975 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets
00:18:43.975 slat (nsec): min=6544, max=33023, avg=9016.04, stdev=3356.24
00:18:43.975 clat (usec): min=192, max=451, avg=249.54, stdev=41.97
00:18:43.975 lat (usec): min=199, max=460, avg=258.56, stdev=42.78
00:18:43.975 clat percentiles (usec):
00:18:43.975 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 225],
00:18:43.975 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245],
00:18:43.975 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 293], 95.00th=[ 388],
00:18:43.975 | 99.00th=[ 396], 99.50th=[ 396], 99.90th=[ 453], 99.95th=[ 453],
00:18:43.975 | 99.99th=[ 453]
00:18:43.975 bw ( KiB/s): min= 4096, max= 4096, per=41.00%, avg=4096.00, stdev= 0.00, samples=1
00:18:43.975 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:18:43.975 lat (usec) : 250=68.41%, 500=27.48%
00:18:43.975 lat (msec) : 20=0.19%, 50=3.93%
00:18:43.975 cpu : usr=0.10%, sys=0.49%, ctx=539, majf=0, minf=1
00:18:43.975 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:43.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.975 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:43.975 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:43.975
00:18:43.975 Run status group 0 (all jobs):
00:18:43.975 READ: bw=2365KiB/s (2422kB/s), 85.9KiB/s-2053KiB/s (87.9kB/s-2102kB/s), io=2424KiB (2482kB), run=1013-1025msec
00:18:43.975 WRITE: bw=9990KiB/s (10.2MB/s), 1998KiB/s-4004KiB/s (2046kB/s-4100kB/s), io=10.0MiB (10.5MB), run=1013-1025msec
00:18:43.975
00:18:43.975 Disk stats (read/write):
00:18:43.975 nvme0n1: ios=570/1024, merge=0/0, ticks=589/223, in_queue=812, util=87.47%
00:18:43.975 nvme0n2: ios=67/512, merge=0/0, ticks=907/111, in_queue=1018, util=94.11%
00:18:43.975 nvme0n3: ios=55/512, merge=0/0, ticks=1636/120, in_queue=1756, util=98.12%
00:18:43.975 nvme0n4: ios=44/512, merge=0/0, ticks=1627/130, in_queue=1757, util=97.69%
00:18:43.975 23:58:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:18:43.975 [global]
00:18:43.975 thread=1
00:18:43.975 invalidate=1
00:18:43.975 rw=write
00:18:43.975 time_based=1
00:18:43.975 runtime=1
00:18:43.975 ioengine=libaio
00:18:43.975 direct=1
00:18:43.975 bs=4096
00:18:43.975 iodepth=128
00:18:43.975 norandommap=0
00:18:43.975 numjobs=1
00:18:43.975
00:18:43.975 verify_dump=1
00:18:43.975 verify_backlog=512
00:18:43.975 verify_state_save=0
00:18:43.975 do_verify=1
00:18:43.975 verify=crc32c-intel
00:18:43.975 [job0]
00:18:43.975 filename=/dev/nvme0n1
00:18:43.975 [job1]
00:18:43.975 filename=/dev/nvme0n2
00:18:43.975 [job2]
00:18:43.975 filename=/dev/nvme0n3
00:18:43.975 [job3]
00:18:43.975 filename=/dev/nvme0n4
00:18:43.975 Could not set queue depth (nvme0n1)
00:18:43.975 Could not set queue depth (nvme0n2)
00:18:43.975 Could not set queue depth (nvme0n3)
00:18:43.975 Could not set queue depth (nvme0n4)
00:18:43.975 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:18:43.975 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:18:43.975 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:18:43.975 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:18:43.975 fio-3.35
00:18:43.975 Starting 4 threads
00:18:45.351
00:18:45.351 job0: (groupid=0, jobs=1): err= 0: pid=789241: Wed Jul 24 23:58:50 2024
00:18:45.351 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec)
00:18:45.351 slat (usec): min=2, max=50772, avg=177.87, stdev=1566.12
00:18:45.351 clat (msec): min=7, max=125, avg=24.78, stdev=20.32
00:18:45.351 lat (msec): min=7, max=125, avg=24.96, stdev=20.46
00:18:45.351 clat percentiles (msec):
00:18:45.351 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 13],
00:18:45.351 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 19], 60.00th=[ 22],
00:18:45.351 | 70.00th=[ 25], 80.00th=[ 33], 90.00th=[ 45], 95.00th=[ 81],
00:18:45.351 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 115],
00:18:45.351 | 99.99th=[ 127]
00:18:45.351 write: IOPS=3187, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1002msec); 0 zone resets
00:18:45.351 slat (usec): min=3, max=13421, avg=135.29, stdev=853.53
00:18:45.351 clat (usec): min=278, max=88254, avg=15864.22, stdev=11617.66
00:18:45.351 lat (usec): min=3476, max=88258, avg=15999.52, stdev=11708.32
00:18:45.351 clat percentiles (usec):
00:18:45.351 | 1.00th=[ 4178], 5.00th=[ 7570], 10.00th=[ 9372], 20.00th=[10028],
00:18:45.351 | 30.00th=[10945], 40.00th=[11994], 50.00th=[12911], 60.00th=[13566],
00:18:45.351 | 70.00th=[14484], 80.00th=[16188], 90.00th=[24249], 95.00th=[41681],
00:18:45.351 | 99.00th=[73925], 99.50th=[84411], 99.90th=[88605], 99.95th=[88605],
00:18:45.351 | 99.99th=[88605]
00:18:45.351 bw ( KiB/s): min=10008, max=14568, per=19.92%, avg=12288.00, stdev=3224.41, samples=2
00:18:45.351 iops : min= 2502, max= 3642, avg=3072.00, stdev=806.10, samples=2
00:18:45.351 lat (usec) : 500=0.02%
00:18:45.351 lat (msec) : 4=0.38%, 10=11.57%, 20=60.77%, 50=22.36%, 100=3.54%
00:18:45.351 lat (msec) : 250=1.36%
00:18:45.351 cpu : usr=2.60%, sys=4.50%, ctx=284, majf=0, minf=1
00:18:45.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:18:45.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:45.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:45.351 issued rwts: total=3072,3194,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:45.351 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:45.351 job1: (groupid=0, jobs=1): err= 0: pid=789242: Wed Jul 24 23:58:50 2024
00:18:45.351 read: IOPS=4197, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1004msec)
00:18:45.351 slat (usec): min=2, max=17339, avg=124.11, stdev=945.77
00:18:45.351 clat (usec): min=409, max=40972, avg=16406.97, stdev=6464.72
00:18:45.351 lat (usec): min=418, max=40980, avg=16531.08, stdev=6503.35
00:18:45.351 clat percentiles (usec):
00:18:45.351 | 1.00th=[ 4293], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[10945],
00:18:45.351 | 30.00th=[11731], 40.00th=[13042], 50.00th=[15008], 60.00th=[17433],
00:18:45.351 | 70.00th=[19006], 80.00th=[21627], 90.00th=[25822], 95.00th=[29492],
00:18:45.351 | 99.00th=[33162], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:18:45.351 | 99.99th=[41157]
00:18:45.351 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets
00:18:45.351 slat (usec): min=3, max=14375, avg=85.08, stdev=613.99
00:18:45.351 clat (usec): min=788, max=41679, avg=12575.86, stdev=5356.82
00:18:45.351 lat (usec): min=805, max=41690, avg=12660.95, stdev=5387.69
00:18:45.351 clat percentiles (usec):
00:18:45.351 | 1.00th=[ 1926], 5.00th=[ 5276], 10.00th=[ 6783], 20.00th=[ 9372],
00:18:45.351 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11731], 60.00th=[12256],
00:18:45.351 | 70.00th=[13042], 80.00th=[15270], 90.00th=[20317], 95.00th=[23725],
00:18:45.351 | 99.00th=[28967], 99.50th=[36439], 99.90th=[41157], 99.95th=[41157],
00:18:45.351 | 99.99th=[41681]
00:18:45.351 bw ( KiB/s): min=17456, max=19328, per=29.82%, avg=18392.00, stdev=1323.70, samples=2
00:18:45.351 iops : min= 4364, max= 4832, avg=4598.00, stdev=330.93, samples=2
00:18:45.351 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.36%
00:18:45.351 lat (msec) : 2=0.18%, 4=1.38%, 10=14.00%, 20=67.47%, 50=16.56%
00:18:45.351 cpu : usr=3.19%, sys=6.18%, ctx=347, majf=0, minf=1
00:18:45.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:18:45.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:45.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:45.351 issued rwts: total=4214,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:45.351 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:45.351 job2: (groupid=0, jobs=1): err= 0: pid=789243: Wed Jul 24 23:58:50 2024
00:18:45.351 read: IOPS=3406, BW=13.3MiB/s (14.0MB/s)(13.3MiB/1002msec)
00:18:45.351 slat (usec): min=3, max=11236, avg=144.99, stdev=815.36
00:18:45.351 clat (usec): min=902, max=54588, avg=19205.73, stdev=10232.83
00:18:45.351 lat (usec): min=7311, max=54603, avg=19350.72, stdev=10277.93
00:18:45.351 clat percentiles (usec):
00:18:45.351 | 1.00th=[ 7767], 5.00th=[10159], 10.00th=[11338], 20.00th=[12125],
00:18:45.351 | 30.00th=[12649], 40.00th=[13304], 50.00th=[14353], 60.00th=[14746],
00:18:45.351 | 70.00th=[20317], 80.00th=[29230], 90.00th=[36439], 95.00th=[41157],
00:18:45.351 | 99.00th=[47973], 99.50th=[49021], 99.90th=[53740], 99.95th=[53740],
00:18:45.351 | 99.99th=[54789]
00:18:45.351 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets
00:18:45.351 slat (usec): min=3, max=13870, avg=133.03, stdev=748.47
00:18:45.351 clat (usec): min=8026, max=51814, avg=16883.86, stdev=8199.87
00:18:45.351 lat (usec): min=8031, max=51822, avg=17016.89, stdev=8252.54
00:18:45.351 clat percentiles (usec):
00:18:45.351 | 1.00th=[ 8848], 5.00th=[10945], 10.00th=[11600], 20.00th=[11863],
00:18:45.351 | 30.00th=[12256], 40.00th=[13042], 50.00th=[13960], 60.00th=[14746],
00:18:45.351 | 70.00th=[15926], 80.00th=[21103], 90.00th=[26608], 95.00th=[35390],
00:18:45.351 | 99.00th=[49546], 99.50th=[50594], 99.90th=[51119], 99.95th=[51643],
00:18:45.351 | 99.99th=[51643]
00:18:45.351 bw ( KiB/s): min=11120, max=17552, per=23.24%, avg=14336.00, stdev=4548.11, samples=2
00:18:45.351 iops : min= 2780, max= 4388, avg=3584.00, stdev=1137.03, samples=2
00:18:45.351 lat (usec) : 1000=0.01%
00:18:45.351 lat (msec) : 10=3.30%, 20=70.42%, 50=25.70%, 100=0.57%
00:18:45.351 cpu : usr=4.30%, sys=5.00%, ctx=395, majf=0, minf=1
00:18:45.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:18:45.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:45.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:45.351 issued rwts: total=3413,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:45.351 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:45.351 job3: (groupid=0, jobs=1): err= 0: pid=789244: Wed Jul 24 23:58:50 2024
00:18:45.351 read: IOPS=3955, BW=15.4MiB/s (16.2MB/s)(15.5MiB/1002msec)
00:18:45.351 slat (usec): min=2, max=16123, avg=116.89, stdev=860.30
00:18:45.351 clat (usec): min=926, max=50710, avg=16429.42, stdev=6045.66
00:18:45.351 lat (usec): min=2738, max=50714, avg=16546.31, stdev=6094.64
00:18:45.351 clat percentiles (usec):
00:18:45.351 | 1.00th=[ 4293], 5.00th=[ 6980], 10.00th=[10290], 20.00th=[12125],
00:18:45.351 | 30.00th=[12518], 40.00th=[13566], 50.00th=[15664], 60.00th=[16909],
00:18:45.351 | 70.00th=[19268], 80.00th=[21365], 90.00th=[25297], 95.00th=[26870],
00:18:45.351 | 99.00th=[34866], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487],
00:18:45.351 |
99.99th=[50594] 00:18:45.351 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:18:45.351 slat (usec): min=3, max=16905, avg=108.83, stdev=822.61 00:18:45.351 clat (usec): min=1449, max=35731, avg=14839.36, stdev=5720.62 00:18:45.351 lat (usec): min=1462, max=35737, avg=14948.20, stdev=5756.90 00:18:45.351 clat percentiles (usec): 00:18:45.351 | 1.00th=[ 1876], 5.00th=[ 6063], 10.00th=[ 8717], 20.00th=[10552], 00:18:45.351 | 30.00th=[11863], 40.00th=[13304], 50.00th=[14484], 60.00th=[15139], 00:18:45.351 | 70.00th=[16188], 80.00th=[17171], 90.00th=[23725], 95.00th=[26084], 00:18:45.351 | 99.00th=[32900], 99.50th=[32900], 99.90th=[32900], 99.95th=[32900], 00:18:45.351 | 99.99th=[35914] 00:18:45.351 bw ( KiB/s): min=16384, max=16384, per=26.56%, avg=16384.00, stdev= 0.00, samples=2 00:18:45.351 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:18:45.351 lat (usec) : 1000=0.01% 00:18:45.352 lat (msec) : 2=0.72%, 4=0.72%, 10=9.77%, 20=67.07%, 50=21.70% 00:18:45.352 lat (msec) : 100=0.01% 00:18:45.352 cpu : usr=3.00%, sys=4.90%, ctx=267, majf=0, minf=1 00:18:45.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:45.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:45.352 issued rwts: total=3963,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:45.352 00:18:45.352 Run status group 0 (all jobs): 00:18:45.352 READ: bw=57.0MiB/s (59.8MB/s), 12.0MiB/s-16.4MiB/s (12.6MB/s-17.2MB/s), io=57.3MiB (60.1MB), run=1002-1004msec 00:18:45.352 WRITE: bw=60.2MiB/s (63.2MB/s), 12.5MiB/s-17.9MiB/s (13.1MB/s-18.8MB/s), io=60.5MiB (63.4MB), run=1002-1004msec 00:18:45.352 00:18:45.352 Disk stats (read/write): 00:18:45.352 nvme0n1: ios=2603/2615, merge=0/0, ticks=27533/17397, in_queue=44930, util=97.90% 00:18:45.352 nvme0n2: 
ios=3628/3764, merge=0/0, ticks=45124/30344, in_queue=75468, util=98.48% 00:18:45.352 nvme0n3: ios=3041/3079, merge=0/0, ticks=19546/16487, in_queue=36033, util=88.95% 00:18:45.352 nvme0n4: ios=3219/3584, merge=0/0, ticks=39922/33783, in_queue=73705, util=98.00% 00:18:45.352 23:58:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:45.352 [global] 00:18:45.352 thread=1 00:18:45.352 invalidate=1 00:18:45.352 rw=randwrite 00:18:45.352 time_based=1 00:18:45.352 runtime=1 00:18:45.352 ioengine=libaio 00:18:45.352 direct=1 00:18:45.352 bs=4096 00:18:45.352 iodepth=128 00:18:45.352 norandommap=0 00:18:45.352 numjobs=1 00:18:45.352 00:18:45.352 verify_dump=1 00:18:45.352 verify_backlog=512 00:18:45.352 verify_state_save=0 00:18:45.352 do_verify=1 00:18:45.352 verify=crc32c-intel 00:18:45.352 [job0] 00:18:45.352 filename=/dev/nvme0n1 00:18:45.352 [job1] 00:18:45.352 filename=/dev/nvme0n2 00:18:45.352 [job2] 00:18:45.352 filename=/dev/nvme0n3 00:18:45.352 [job3] 00:18:45.352 filename=/dev/nvme0n4 00:18:45.352 Could not set queue depth (nvme0n1) 00:18:45.352 Could not set queue depth (nvme0n2) 00:18:45.352 Could not set queue depth (nvme0n3) 00:18:45.352 Could not set queue depth (nvme0n4) 00:18:45.610 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:45.610 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:45.610 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:45.610 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:45.610 fio-3.35 00:18:45.610 Starting 4 threads 00:18:46.984 00:18:46.984 job0: (groupid=0, jobs=1): err= 0: pid=789592: Wed Jul 24 23:58:52 2024 00:18:46.984 read: IOPS=4594, 
BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:18:46.984 slat (usec): min=2, max=13464, avg=104.76, stdev=721.75 00:18:46.984 clat (usec): min=5344, max=64518, avg=13302.92, stdev=7596.39 00:18:46.984 lat (usec): min=5352, max=64532, avg=13407.68, stdev=7658.88 00:18:46.984 clat percentiles (usec): 00:18:46.984 | 1.00th=[ 5669], 5.00th=[ 7177], 10.00th=[ 8094], 20.00th=[ 8717], 00:18:46.984 | 30.00th=[ 9110], 40.00th=[10814], 50.00th=[11863], 60.00th=[12518], 00:18:46.984 | 70.00th=[13960], 80.00th=[15270], 90.00th=[17433], 95.00th=[25822], 00:18:46.984 | 99.00th=[49021], 99.50th=[53216], 99.90th=[56361], 99.95th=[56361], 00:18:46.984 | 99.99th=[64750] 00:18:46.984 write: IOPS=5022, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1003msec); 0 zone resets 00:18:46.984 slat (usec): min=3, max=15239, avg=94.03, stdev=711.02 00:18:46.984 clat (usec): min=2211, max=40538, avg=13021.56, stdev=6980.57 00:18:46.984 lat (usec): min=2219, max=40555, avg=13115.59, stdev=7036.96 00:18:46.984 clat percentiles (usec): 00:18:46.984 | 1.00th=[ 2606], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 7177], 00:18:46.984 | 30.00th=[ 8029], 40.00th=[10683], 50.00th=[11600], 60.00th=[12256], 00:18:46.984 | 70.00th=[13304], 80.00th=[17171], 90.00th=[24773], 95.00th=[27395], 00:18:46.984 | 99.00th=[35390], 99.50th=[36439], 99.90th=[36963], 99.95th=[38536], 00:18:46.984 | 99.99th=[40633] 00:18:46.984 bw ( KiB/s): min=18344, max=20944, per=32.44%, avg=19644.00, stdev=1838.48, samples=2 00:18:46.984 iops : min= 4586, max= 5236, avg=4911.00, stdev=459.62, samples=2 00:18:46.984 lat (msec) : 4=0.86%, 10=34.59%, 20=52.01%, 50=12.08%, 100=0.46% 00:18:46.984 cpu : usr=4.59%, sys=5.79%, ctx=411, majf=0, minf=1 00:18:46.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:46.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:46.984 issued rwts: total=4608,5038,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:18:46.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:46.984 job1: (groupid=0, jobs=1): err= 0: pid=789595: Wed Jul 24 23:58:52 2024 00:18:46.985 read: IOPS=2517, BW=9.83MiB/s (10.3MB/s)(10.0MiB/1017msec) 00:18:46.985 slat (usec): min=2, max=21505, avg=147.93, stdev=1088.76 00:18:46.985 clat (usec): min=7258, max=70518, avg=19609.60, stdev=10369.03 00:18:46.985 lat (usec): min=7291, max=70526, avg=19757.54, stdev=10448.56 00:18:46.985 clat percentiles (usec): 00:18:46.985 | 1.00th=[ 7504], 5.00th=[ 8717], 10.00th=[10421], 20.00th=[12125], 00:18:46.985 | 30.00th=[12911], 40.00th=[15270], 50.00th=[17695], 60.00th=[18744], 00:18:46.985 | 70.00th=[21365], 80.00th=[23200], 90.00th=[31589], 95.00th=[42206], 00:18:46.985 | 99.00th=[61080], 99.50th=[67634], 99.90th=[70779], 99.95th=[70779], 00:18:46.985 | 99.99th=[70779] 00:18:46.985 write: IOPS=2841, BW=11.1MiB/s (11.6MB/s)(11.3MiB/1017msec); 0 zone resets 00:18:46.985 slat (usec): min=2, max=46001, avg=201.97, stdev=1621.69 00:18:46.985 clat (usec): min=1480, max=195845, avg=27282.79, stdev=31707.71 00:18:46.985 lat (usec): min=1498, max=195870, avg=27484.76, stdev=31885.18 00:18:46.985 clat percentiles (msec): 00:18:46.985 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:18:46.985 | 30.00th=[ 11], 40.00th=[ 14], 50.00th=[ 17], 60.00th=[ 18], 00:18:46.985 | 70.00th=[ 23], 80.00th=[ 40], 90.00th=[ 55], 95.00th=[ 103], 00:18:46.985 | 99.00th=[ 167], 99.50th=[ 178], 99.90th=[ 197], 99.95th=[ 197], 00:18:46.985 | 99.99th=[ 197] 00:18:46.985 bw ( KiB/s): min= 9808, max=12288, per=18.25%, avg=11048.00, stdev=1753.62, samples=2 00:18:46.985 iops : min= 2452, max= 3072, avg=2762.00, stdev=438.41, samples=2 00:18:46.985 lat (msec) : 2=0.15%, 4=0.50%, 10=11.76%, 20=52.81%, 50=25.47% 00:18:46.985 lat (msec) : 100=6.55%, 250=2.77% 00:18:46.985 cpu : usr=2.26%, sys=3.35%, ctx=262, majf=0, minf=1 00:18:46.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, 
>=64=98.8% 00:18:46.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:46.985 issued rwts: total=2560,2890,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:46.985 job2: (groupid=0, jobs=1): err= 0: pid=789596: Wed Jul 24 23:58:52 2024 00:18:46.985 read: IOPS=2078, BW=8315KiB/s (8515kB/s)(8448KiB/1016msec) 00:18:46.985 slat (usec): min=3, max=46611, avg=173.51, stdev=1393.97 00:18:46.985 clat (usec): min=3042, max=62789, avg=21758.79, stdev=10708.25 00:18:46.985 lat (usec): min=11541, max=62796, avg=21932.30, stdev=10794.23 00:18:46.985 clat percentiles (usec): 00:18:46.985 | 1.00th=[11600], 5.00th=[12387], 10.00th=[13435], 20.00th=[13960], 00:18:46.985 | 30.00th=[14615], 40.00th=[15401], 50.00th=[16188], 60.00th=[17957], 00:18:46.985 | 70.00th=[27132], 80.00th=[32113], 90.00th=[35390], 95.00th=[35914], 00:18:46.985 | 99.00th=[62653], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:18:46.985 | 99.99th=[62653] 00:18:46.985 write: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(10.0MiB/1016msec); 0 zone resets 00:18:46.985 slat (usec): min=4, max=49552, avg=243.03, stdev=1881.83 00:18:46.985 clat (msec): min=7, max=119, avg=31.86, stdev=24.42 00:18:46.985 lat (msec): min=7, max=119, avg=32.10, stdev=24.61 00:18:46.985 clat percentiles (msec): 00:18:46.985 | 1.00th=[ 12], 5.00th=[ 14], 10.00th=[ 14], 20.00th=[ 14], 00:18:46.985 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 21], 60.00th=[ 24], 00:18:46.985 | 70.00th=[ 31], 80.00th=[ 47], 90.00th=[ 73], 95.00th=[ 85], 00:18:46.985 | 99.00th=[ 113], 99.50th=[ 114], 99.90th=[ 114], 99.95th=[ 114], 00:18:46.985 | 99.99th=[ 121] 00:18:46.985 bw ( KiB/s): min= 7680, max=12288, per=16.49%, avg=9984.00, stdev=3258.35, samples=2 00:18:46.985 iops : min= 1920, max= 3072, avg=2496.00, stdev=814.59, samples=2 00:18:46.985 lat (msec) : 4=0.02%, 10=0.17%, 20=54.32%, 
50=33.58%, 100=10.66% 00:18:46.985 lat (msec) : 250=1.24% 00:18:46.985 cpu : usr=3.05%, sys=3.94%, ctx=180, majf=0, minf=1 00:18:46.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:46.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:46.985 issued rwts: total=2112,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:46.985 job3: (groupid=0, jobs=1): err= 0: pid=789597: Wed Jul 24 23:58:52 2024 00:18:46.985 read: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec) 00:18:46.985 slat (usec): min=2, max=29943, avg=103.94, stdev=945.47 00:18:46.985 clat (usec): min=2146, max=63993, avg=15003.94, stdev=8543.51 00:18:46.985 lat (usec): min=2156, max=64028, avg=15107.88, stdev=8614.95 00:18:46.985 clat percentiles (usec): 00:18:46.985 | 1.00th=[ 3359], 5.00th=[ 5014], 10.00th=[ 7242], 20.00th=[10945], 00:18:46.985 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12780], 60.00th=[13698], 00:18:46.985 | 70.00th=[15139], 80.00th=[18220], 90.00th=[24249], 95.00th=[26608], 00:18:46.985 | 99.00th=[52167], 99.50th=[52691], 99.90th=[55313], 99.95th=[55837], 00:18:46.985 | 99.99th=[64226] 00:18:46.985 write: IOPS=4839, BW=18.9MiB/s (19.8MB/s)(19.2MiB/1014msec); 0 zone resets 00:18:46.985 slat (usec): min=3, max=42549, avg=85.58, stdev=928.82 00:18:46.985 clat (usec): min=1001, max=79211, avg=13732.82, stdev=10549.00 00:18:46.985 lat (usec): min=1034, max=79227, avg=13818.40, stdev=10586.79 00:18:46.985 clat percentiles (usec): 00:18:46.985 | 1.00th=[ 2057], 5.00th=[ 2933], 10.00th=[ 4113], 20.00th=[ 6128], 00:18:46.985 | 30.00th=[ 8455], 40.00th=[11338], 50.00th=[12518], 60.00th=[13173], 00:18:46.985 | 70.00th=[14615], 80.00th=[16057], 90.00th=[24249], 95.00th=[31851], 00:18:46.985 | 99.00th=[47449], 99.50th=[70779], 99.90th=[78119], 99.95th=[78119], 00:18:46.985 | 
99.99th=[79168] 00:18:46.985 bw ( KiB/s): min=17752, max=20480, per=31.57%, avg=19116.00, stdev=1928.99, samples=2 00:18:46.985 iops : min= 4438, max= 5120, avg=4779.00, stdev=482.25, samples=2 00:18:46.985 lat (msec) : 2=0.40%, 4=5.15%, 10=20.43%, 20=59.00%, 50=13.37% 00:18:46.985 lat (msec) : 100=1.64% 00:18:46.985 cpu : usr=3.16%, sys=4.44%, ctx=399, majf=0, minf=1 00:18:46.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:46.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:46.985 issued rwts: total=4096,4907,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:46.985 00:18:46.985 Run status group 0 (all jobs): 00:18:46.985 READ: bw=51.4MiB/s (53.9MB/s), 8315KiB/s-17.9MiB/s (8515kB/s-18.8MB/s), io=52.2MiB (54.8MB), run=1003-1017msec 00:18:46.985 WRITE: bw=59.1MiB/s (62.0MB/s), 9.84MiB/s-19.6MiB/s (10.3MB/s-20.6MB/s), io=60.1MiB (63.1MB), run=1003-1017msec 00:18:46.985 00:18:46.985 Disk stats (read/write): 00:18:46.985 nvme0n1: ios=3634/3587, merge=0/0, ticks=31107/32107, in_queue=63214, util=86.87% 00:18:46.985 nvme0n2: ios=2585/2639, merge=0/0, ticks=32208/39136, in_queue=71344, util=87.09% 00:18:46.985 nvme0n3: ios=2069/2363, merge=0/0, ticks=21422/30893, in_queue=52315, util=98.01% 00:18:46.985 nvme0n4: ios=3784/4454, merge=0/0, ticks=46347/53205, in_queue=99552, util=89.67% 00:18:46.986 23:58:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:46.986 23:58:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=789733 00:18:46.986 23:58:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:46.986 23:58:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:46.986 [global] 00:18:46.986 thread=1 00:18:46.986 
invalidate=1 00:18:46.986 rw=read 00:18:46.986 time_based=1 00:18:46.986 runtime=10 00:18:46.986 ioengine=libaio 00:18:46.986 direct=1 00:18:46.986 bs=4096 00:18:46.986 iodepth=1 00:18:46.986 norandommap=1 00:18:46.986 numjobs=1 00:18:46.986 00:18:46.986 [job0] 00:18:46.986 filename=/dev/nvme0n1 00:18:46.986 [job1] 00:18:46.986 filename=/dev/nvme0n2 00:18:46.986 [job2] 00:18:46.986 filename=/dev/nvme0n3 00:18:46.986 [job3] 00:18:46.986 filename=/dev/nvme0n4 00:18:46.986 Could not set queue depth (nvme0n1) 00:18:46.986 Could not set queue depth (nvme0n2) 00:18:46.986 Could not set queue depth (nvme0n3) 00:18:46.986 Could not set queue depth (nvme0n4) 00:18:46.986 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:46.986 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:46.986 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:46.986 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:46.986 fio-3.35 00:18:46.986 Starting 4 threads 00:18:50.263 23:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:50.263 23:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:50.263 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=29925376, buflen=4096 00:18:50.263 fio: pid=789826, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:50.263 23:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:50.263 23:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc0 00:18:50.263 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=1593344, buflen=4096 00:18:50.263 fio: pid=789825, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:50.521 23:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:50.521 23:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:50.521 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=335872, buflen=4096 00:18:50.521 fio: pid=789823, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:50.779 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=6189056, buflen=4096 00:18:50.779 fio: pid=789824, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:50.779 23:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:50.780 23:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:50.780 00:18:50.780 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=789823: Wed Jul 24 23:58:56 2024 00:18:50.780 read: IOPS=24, BW=95.9KiB/s (98.2kB/s)(328KiB/3420msec) 00:18:50.780 slat (usec): min=11, max=17803, avg=434.81, stdev=2670.02 00:18:50.780 clat (usec): min=40899, max=41137, avg=40981.50, stdev=30.91 00:18:50.780 lat (usec): min=40931, max=58940, avg=41216.35, stdev=1981.63 00:18:50.780 clat percentiles (usec): 00:18:50.780 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:50.780 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:50.780 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:50.780 | 99.00th=[41157], 
99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:50.780 | 99.99th=[41157] 00:18:50.780 bw ( KiB/s): min= 96, max= 104, per=0.95%, avg=97.33, stdev= 3.27, samples=6 00:18:50.780 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:18:50.780 lat (msec) : 50=98.80% 00:18:50.780 cpu : usr=0.00%, sys=0.06%, ctx=85, majf=0, minf=1 00:18:50.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.780 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.780 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.780 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=789824: Wed Jul 24 23:58:56 2024 00:18:50.780 read: IOPS=414, BW=1655KiB/s (1695kB/s)(6044KiB/3651msec) 00:18:50.780 slat (usec): min=4, max=15618, avg=46.93, stdev=684.52 00:18:50.780 clat (usec): min=274, max=42034, avg=2358.74, stdev=8780.53 00:18:50.780 lat (usec): min=279, max=42047, avg=2405.70, stdev=8800.89 00:18:50.780 clat percentiles (usec): 00:18:50.780 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 310], 00:18:50.780 | 30.00th=[ 318], 40.00th=[ 334], 50.00th=[ 375], 60.00th=[ 379], 00:18:50.780 | 70.00th=[ 383], 80.00th=[ 420], 90.00th=[ 515], 95.00th=[ 742], 00:18:50.780 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:18:50.780 | 99.99th=[42206] 00:18:50.780 bw ( KiB/s): min= 96, max= 7752, per=13.64%, avg=1388.57, stdev=2825.96, samples=7 00:18:50.780 iops : min= 24, max= 1938, avg=347.14, stdev=706.49, samples=7 00:18:50.780 lat (usec) : 500=87.63%, 750=7.34%, 1000=0.07% 00:18:50.780 lat (msec) : 50=4.89% 00:18:50.780 cpu : usr=0.22%, sys=0.58%, ctx=1516, majf=0, minf=1 00:18:50.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.780 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.780 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.780 issued rwts: total=1512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.780 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=789825: Wed Jul 24 23:58:56 2024 00:18:50.780 read: IOPS=123, BW=493KiB/s (505kB/s)(1556KiB/3156msec) 00:18:50.780 slat (nsec): min=6201, max=55018, avg=19634.97, stdev=9816.83 00:18:50.780 clat (usec): min=335, max=42228, avg=8029.22, stdev=15892.37 00:18:50.780 lat (usec): min=352, max=42238, avg=8048.87, stdev=15891.83 00:18:50.780 clat percentiles (usec): 00:18:50.780 | 1.00th=[ 347], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 363], 00:18:50.780 | 30.00th=[ 375], 40.00th=[ 396], 50.00th=[ 412], 60.00th=[ 420], 00:18:50.780 | 70.00th=[ 441], 80.00th=[ 523], 90.00th=[41157], 95.00th=[41157], 00:18:50.780 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:50.780 | 99.99th=[42206] 00:18:50.780 bw ( KiB/s): min= 96, max= 2096, per=5.00%, avg=509.33, stdev=785.61, samples=6 00:18:50.780 iops : min= 24, max= 524, avg=127.33, stdev=196.40, samples=6 00:18:50.780 lat (usec) : 500=78.46%, 750=2.31% 00:18:50.780 lat (msec) : 2=0.26%, 50=18.72% 00:18:50.780 cpu : usr=0.22%, sys=0.19%, ctx=390, majf=0, minf=1 00:18:50.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.780 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.780 issued rwts: total=390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.780 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=789826: Wed Jul 24 23:58:56 
2024 00:18:50.780 read: IOPS=2546, BW=9.95MiB/s (10.4MB/s)(28.5MiB/2869msec) 00:18:50.780 slat (nsec): min=5284, max=62725, avg=10528.16, stdev=5506.39 00:18:50.780 clat (usec): min=321, max=1625, avg=376.20, stdev=59.33 00:18:50.780 lat (usec): min=327, max=1632, avg=386.72, stdev=61.54 00:18:50.780 clat percentiles (usec): 00:18:50.780 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 338], 20.00th=[ 347], 00:18:50.780 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 367], 00:18:50.780 | 70.00th=[ 379], 80.00th=[ 388], 90.00th=[ 449], 95.00th=[ 469], 00:18:50.780 | 99.00th=[ 578], 99.50th=[ 668], 99.90th=[ 1106], 99.95th=[ 1156], 00:18:50.780 | 99.99th=[ 1631] 00:18:50.780 bw ( KiB/s): min= 9528, max=10944, per=99.75%, avg=10150.40, stdev=611.03, samples=5 00:18:50.780 iops : min= 2382, max= 2736, avg=2537.60, stdev=152.76, samples=5 00:18:50.780 lat (usec) : 500=97.08%, 750=2.56%, 1000=0.21% 00:18:50.780 lat (msec) : 2=0.14% 00:18:50.780 cpu : usr=2.06%, sys=3.97%, ctx=7307, majf=0, minf=1 00:18:50.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.780 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.780 issued rwts: total=7307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.780 00:18:50.780 Run status group 0 (all jobs): 00:18:50.780 READ: bw=9.94MiB/s (10.4MB/s), 95.9KiB/s-9.95MiB/s (98.2kB/s-10.4MB/s), io=36.3MiB (38.0MB), run=2869-3651msec 00:18:50.780 00:18:50.780 Disk stats (read/write): 00:18:50.780 nvme0n1: ios=81/0, merge=0/0, ticks=3321/0, in_queue=3321, util=95.45% 00:18:50.780 nvme0n2: ios=1407/0, merge=0/0, ticks=3518/0, in_queue=3518, util=95.23% 00:18:50.780 nvme0n3: ios=388/0, merge=0/0, ticks=3081/0, in_queue=3081, util=96.79% 00:18:50.780 nvme0n4: ios=7306/0, merge=0/0, ticks=2622/0, in_queue=2622, util=96.71% 00:18:51.038 
23:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:51.038 23:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:51.296 23:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:51.296 23:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:51.554 23:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:51.554 23:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:51.812 23:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:51.812 23:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:52.069 23:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:52.069 23:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 789733 00:18:52.069 23:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:52.069 23:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:52.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:52.327 23:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:52.327 23:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:18:52.327 23:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 
00:18:52.327 23:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:52.327 23:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:52.327 23:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:52.327 23:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:18:52.327 23:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:52.327 23:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:52.327 nvmf hotplug test: fio failed as expected 00:18:52.327 23:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:52.586 rmmod nvme_tcp 00:18:52.586 rmmod nvme_fabrics 00:18:52.586 rmmod 
nvme_keyring 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 787714 ']' 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 787714 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 787714 ']' 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 787714 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 787714 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 787714' 00:18:52.586 killing process with pid 787714 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 787714 00:18:52.586 23:58:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 787714 00:18:52.845 23:58:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:52.845 23:58:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:52.845 23:58:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:52.845 23:58:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:52.845 23:58:58 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:52.845 23:58:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.845 23:58:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.845 23:58:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.748 23:59:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:54.748 00:18:54.748 real 0m23.153s 00:18:54.748 user 1m20.631s 00:18:54.748 sys 0m6.154s 00:18:54.748 23:59:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:54.748 23:59:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.748 ************************************ 00:18:54.748 END TEST nvmf_fio_target 00:18:54.748 ************************************ 00:18:54.748 23:59:00 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:54.748 23:59:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:54.748 23:59:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:54.748 23:59:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:55.006 ************************************ 00:18:55.006 START TEST nvmf_bdevio 00:18:55.006 ************************************ 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:55.006 * Looking for test storage... 
00:18:55.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.006 23:59:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:55.007 23:59:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:56.907 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:56.907 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:56.907 Found net devices under 0000:09:00.0: cvl_0_0 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.907 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:56.907 Found net devices under 0000:09:00.1: cvl_0_1 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:56.908 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:57.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:18:57.166 00:18:57.166 --- 10.0.0.2 ping statistics --- 00:18:57.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.166 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:57.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:57.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:18:57.166 00:18:57.166 --- 10.0.0.1 ping statistics --- 00:18:57.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.166 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=792440 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 792440 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 792440 ']' 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio 
-- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:57.166 23:59:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:57.166 [2024-07-24 23:59:02.794558] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:57.166 [2024-07-24 23:59:02.794645] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.166 EAL: No free 2048 kB hugepages reported on node 1 00:18:57.166 [2024-07-24 23:59:02.859244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:57.424 [2024-07-24 23:59:02.951228] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.424 [2024-07-24 23:59:02.951298] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.424 [2024-07-24 23:59:02.951311] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.424 [2024-07-24 23:59:02.951322] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.424 [2024-07-24 23:59:02.951332] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:57.424 [2024-07-24 23:59:02.951456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:57.424 [2024-07-24 23:59:02.951521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:57.424 [2024-07-24 23:59:02.951573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:57.424 [2024-07-24 23:59:02.951575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:57.424 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:57.424 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:18:57.424 23:59:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:57.424 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:57.424 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:57.424 23:59:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.424 23:59:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:57.424 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.424 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:57.424 [2024-07-24 23:59:03.107696] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.424 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.424 23:59:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:57.424 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.424 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:57.424 Malloc0 00:18:57.424 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.424 23:59:03 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:57.424 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.424 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:57.682 [2024-07-24 23:59:03.159345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:18:57.682 { 00:18:57.682 "params": { 00:18:57.682 "name": "Nvme$subsystem", 00:18:57.682 "trtype": "$TEST_TRANSPORT", 00:18:57.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:57.682 "adrfam": "ipv4", 00:18:57.682 "trsvcid": "$NVMF_PORT", 00:18:57.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:57.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:57.682 "hdgst": ${hdgst:-false}, 00:18:57.682 "ddgst": ${ddgst:-false} 00:18:57.682 }, 00:18:57.682 "method": "bdev_nvme_attach_controller" 00:18:57.682 } 00:18:57.682 EOF 00:18:57.682 )") 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:57.682 23:59:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:57.682 "params": { 00:18:57.682 "name": "Nvme1", 00:18:57.682 "trtype": "tcp", 00:18:57.682 "traddr": "10.0.0.2", 00:18:57.682 "adrfam": "ipv4", 00:18:57.682 "trsvcid": "4420", 00:18:57.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.682 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:57.682 "hdgst": false, 00:18:57.682 "ddgst": false 00:18:57.682 }, 00:18:57.682 "method": "bdev_nvme_attach_controller" 00:18:57.682 }' 00:18:57.682 [2024-07-24 23:59:03.203042] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:18:57.682 [2024-07-24 23:59:03.203165] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792479 ] 00:18:57.682 EAL: No free 2048 kB hugepages reported on node 1 00:18:57.682 [2024-07-24 23:59:03.263165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:57.682 [2024-07-24 23:59:03.353252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.682 [2024-07-24 23:59:03.353301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.682 [2024-07-24 23:59:03.353304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.939 I/O targets: 00:18:57.939 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:57.939 00:18:57.939 00:18:57.939 CUnit - A unit testing framework for C - Version 2.1-3 00:18:57.939 http://cunit.sourceforge.net/ 00:18:57.939 00:18:57.939 00:18:57.939 Suite: bdevio tests on: Nvme1n1 00:18:57.939 Test: blockdev write read block ...passed 00:18:57.939 Test: blockdev write zeroes read block ...passed 00:18:57.939 Test: blockdev write zeroes read no split ...passed 00:18:58.196 Test: blockdev write zeroes read split ...passed 00:18:58.196 Test: blockdev write zeroes read split partial ...passed 00:18:58.196 Test: blockdev reset ...[2024-07-24 23:59:03.728069] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:58.196 [2024-07-24 23:59:03.728182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c1a00 (9): Bad file descriptor 00:18:58.196 [2024-07-24 23:59:03.821407] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:58.196 passed 00:18:58.196 Test: blockdev write read 8 blocks ...passed 00:18:58.196 Test: blockdev write read size > 128k ...passed 00:18:58.196 Test: blockdev write read invalid size ...passed 00:18:58.196 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:58.196 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:58.196 Test: blockdev write read max offset ...passed 00:18:58.454 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:58.454 Test: blockdev writev readv 8 blocks ...passed 00:18:58.454 Test: blockdev writev readv 30 x 1block ...passed 00:18:58.454 Test: blockdev writev readv block ...passed 00:18:58.454 Test: blockdev writev readv size > 128k ...passed 00:18:58.454 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:58.454 Test: blockdev comparev and writev ...[2024-07-24 23:59:04.080241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.454 [2024-07-24 23:59:04.080276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.454 [2024-07-24 23:59:04.080300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.454 [2024-07-24 23:59:04.080318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:58.454 [2024-07-24 23:59:04.080672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.454 [2024-07-24 23:59:04.080696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:58.454 [2024-07-24 23:59:04.080718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.454 [2024-07-24 23:59:04.080734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:58.454 [2024-07-24 23:59:04.081079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.454 [2024-07-24 23:59:04.081109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:58.454 [2024-07-24 23:59:04.081133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.454 [2024-07-24 23:59:04.081149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:58.454 [2024-07-24 23:59:04.081506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.454 [2024-07-24 23:59:04.081529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:58.454 [2024-07-24 23:59:04.081550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.454 [2024-07-24 23:59:04.081565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:58.454 passed 00:18:58.454 Test: blockdev nvme passthru rw ...passed 00:18:58.454 Test: blockdev nvme passthru vendor specific ...[2024-07-24 23:59:04.165430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:58.454 [2024-07-24 23:59:04.165455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:58.454 [2024-07-24 23:59:04.165639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:58.454 [2024-07-24 23:59:04.165662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:58.454 [2024-07-24 23:59:04.165845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:58.454 [2024-07-24 23:59:04.165868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:58.454 [2024-07-24 23:59:04.166049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:58.454 [2024-07-24 23:59:04.166078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:58.454 passed 00:18:58.712 Test: blockdev nvme admin passthru ...passed 00:18:58.712 Test: blockdev copy ...passed 00:18:58.712 00:18:58.712 Run Summary: Type Total Ran Passed Failed Inactive 00:18:58.712 suites 1 1 n/a 0 0 00:18:58.712 tests 23 23 23 0 0 00:18:58.712 asserts 152 152 152 0 n/a 00:18:58.712 00:18:58.712 Elapsed time = 1.400 seconds 00:18:58.712 23:59:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:58.712 23:59:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.712 23:59:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:58.712 23:59:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.712 23:59:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:58.712 23:59:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:18:58.712 23:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:58.712 23:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:58.970 rmmod nvme_tcp 00:18:58.970 rmmod nvme_fabrics 00:18:58.970 rmmod nvme_keyring 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 792440 ']' 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 792440 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 792440 ']' 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 792440 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 792440 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 792440' 00:18:58.970 killing process with pid 792440 00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 792440 
00:18:58.970 23:59:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 792440 00:18:59.228 23:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:59.228 23:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:59.228 23:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:59.228 23:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:59.228 23:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:59.228 23:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.228 23:59:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.228 23:59:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.128 23:59:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:01.128 00:19:01.128 real 0m6.325s 00:19:01.128 user 0m10.159s 00:19:01.128 sys 0m2.066s 00:19:01.128 23:59:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:01.128 23:59:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:01.128 ************************************ 00:19:01.128 END TEST nvmf_bdevio 00:19:01.128 ************************************ 00:19:01.128 23:59:06 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:01.128 23:59:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:01.128 23:59:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:01.128 23:59:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:01.386 ************************************ 00:19:01.386 START TEST nvmf_auth_target 00:19:01.386 ************************************ 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:01.386 * Looking for test storage... 00:19:01.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:01.386 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:01.387 
23:59:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:01.387 23:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.285 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:03.286 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:03.286 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.286 23:59:08 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:03.286 Found net devices under 0000:09:00.0: cvl_0_0 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:03.286 Found net devices under 0000:09:00.1: cvl_0_1 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:03.286 23:59:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:03.544 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:03.544 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:03.544 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:03.544 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:19:03.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:19:03.545 00:19:03.545 --- 10.0.0.2 ping statistics --- 00:19:03.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.545 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:03.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:03.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:19:03.545 00:19:03.545 --- 10.0.0.1 ping statistics --- 00:19:03.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.545 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=794546 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 794546 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 794546 ']' 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:03.545 23:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=794680 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3b72f167c2b44d94786104a2bd89f81e013e3360eefd2ce8 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.RK8 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3b72f167c2b44d94786104a2bd89f81e013e3360eefd2ce8 0 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3b72f167c2b44d94786104a2bd89f81e013e3360eefd2ce8 0 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3b72f167c2b44d94786104a2bd89f81e013e3360eefd2ce8 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:03.803 23:59:09 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.RK8 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.RK8 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.RK8 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3d276edb6a62516ee9db2d89257d52c374de3990a2f0263ffcfeb08f5b3b4497 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Rzk 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3d276edb6a62516ee9db2d89257d52c374de3990a2f0263ffcfeb08f5b3b4497 3 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3d276edb6a62516ee9db2d89257d52c374de3990a2f0263ffcfeb08f5b3b4497 3 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@704 -- # key=3d276edb6a62516ee9db2d89257d52c374de3990a2f0263ffcfeb08f5b3b4497 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Rzk 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Rzk 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Rzk 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:03.803 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:03.804 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:03.804 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:03.804 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:03.804 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:03.804 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:03.804 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=897b04b53f93383a884c0c330e488ece 00:19:03.804 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:03.804 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.cFX 00:19:03.804 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 897b04b53f93383a884c0c330e488ece 1 00:19:03.804 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 897b04b53f93383a884c0c330e488ece 1 00:19:03.804 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:03.804 23:59:09 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:03.804 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=897b04b53f93383a884c0c330e488ece 00:19:03.804 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:03.804 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.cFX 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.cFX 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.cFX 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7eda923d5984555c90684cbdfe3947af3f0e2d8f540153b7 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.jt5 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7eda923d5984555c90684cbdfe3947af3f0e2d8f540153b7 2 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
7eda923d5984555c90684cbdfe3947af3f0e2d8f540153b7 2 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7eda923d5984555c90684cbdfe3947af3f0e2d8f540153b7 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.jt5 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.jt5 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.jt5 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:04.062 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f084d994ed2e3c39ca03753bcc21f5ac9cef59432d2e6242 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.NFj 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # 
format_dhchap_key f084d994ed2e3c39ca03753bcc21f5ac9cef59432d2e6242 2 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f084d994ed2e3c39ca03753bcc21f5ac9cef59432d2e6242 2 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f084d994ed2e3c39ca03753bcc21f5ac9cef59432d2e6242 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.NFj 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.NFj 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.NFj 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1b8b7038c9fbcd08658b4ad987264596 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.9EN 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1b8b7038c9fbcd08658b4ad987264596 1 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1b8b7038c9fbcd08658b4ad987264596 1 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1b8b7038c9fbcd08658b4ad987264596 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.9EN 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.9EN 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.9EN 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=19aed2f1904ab5ad55647bf702d31bd4db9c628215bc85757e52990cd86bd7e5 00:19:04.063 23:59:09 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.diq 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 19aed2f1904ab5ad55647bf702d31bd4db9c628215bc85757e52990cd86bd7e5 3 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 19aed2f1904ab5ad55647bf702d31bd4db9c628215bc85757e52990cd86bd7e5 3 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=19aed2f1904ab5ad55647bf702d31bd4db9c628215bc85757e52990cd86bd7e5 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.diq 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.diq 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.diq 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 794546 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 794546 ']' 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:04.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:04.063 23:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:04.321 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:04.321 23:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 794680 /var/tmp/host.sock 00:19:04.321 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 794680 ']' 00:19:04.321 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:19:04.321 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:04.321 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:04.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:04.321 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:04.321 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.604 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:04.604 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:04.604 23:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:04.604 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.604 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.604 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.604 23:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:04.604 23:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RK8 00:19:04.604 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.604 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.604 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.604 23:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.RK8 00:19:04.604 23:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.RK8 00:19:04.863 23:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Rzk ]] 00:19:04.863 23:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Rzk 00:19:04.863 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.863 23:59:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.863 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.863 23:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Rzk 00:19:04.863 23:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Rzk 00:19:05.120 23:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:05.120 23:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.cFX 00:19:05.120 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.120 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.120 23:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.120 23:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.cFX 00:19:05.120 23:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.cFX 00:19:05.378 23:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.jt5 ]] 00:19:05.378 23:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jt5 00:19:05.378 23:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.378 23:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.378 23:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.378 23:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jt5 00:19:05.378 23:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jt5 00:19:05.635 23:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:05.635 23:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.NFj 00:19:05.635 23:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.635 23:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.635 23:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.635 23:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.NFj 00:19:05.635 23:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.NFj 00:19:05.893 23:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.9EN ]] 00:19:05.893 23:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9EN 00:19:05.893 23:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.893 23:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.893 23:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.893 23:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9EN 00:19:05.893 23:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.9EN 00:19:06.151 23:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:06.151 23:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.diq 00:19:06.151 23:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.151 23:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.151 23:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.151 23:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.diq 00:19:06.151 23:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.diq 00:19:06.716 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:06.716 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:06.716 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.716 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.716 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:06.716 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:06.716 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:06.716 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.716 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:06.716 23:59:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:06.716 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:06.716 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.716 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.716 23:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.716 23:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.716 23:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.716 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.716 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.282 00:19:07.282 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.282 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.282 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.282 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.282 
23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.282 23:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.282 23:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.282 23:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.282 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.282 { 00:19:07.282 "cntlid": 1, 00:19:07.282 "qid": 0, 00:19:07.282 "state": "enabled", 00:19:07.282 "listen_address": { 00:19:07.282 "trtype": "TCP", 00:19:07.282 "adrfam": "IPv4", 00:19:07.282 "traddr": "10.0.0.2", 00:19:07.282 "trsvcid": "4420" 00:19:07.282 }, 00:19:07.283 "peer_address": { 00:19:07.283 "trtype": "TCP", 00:19:07.283 "adrfam": "IPv4", 00:19:07.283 "traddr": "10.0.0.1", 00:19:07.283 "trsvcid": "39314" 00:19:07.283 }, 00:19:07.283 "auth": { 00:19:07.283 "state": "completed", 00:19:07.283 "digest": "sha256", 00:19:07.283 "dhgroup": "null" 00:19:07.283 } 00:19:07.283 } 00:19:07.283 ]' 00:19:07.283 23:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.541 23:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.541 23:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.541 23:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:07.541 23:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.541 23:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.541 23:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.541 23:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:07.797 23:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:19:08.730 23:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.730 23:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:08.730 23:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.730 23:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.730 23:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.730 23:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.730 23:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.730 23:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.988 23:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:08.988 23:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.988 23:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 
00:19:08.988 23:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:08.988 23:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:08.988 23:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.988 23:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.988 23:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.988 23:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.988 23:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.988 23:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.988 23:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.245 00:19:09.245 23:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.245 23:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.245 23:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.503 23:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:19:09.503 23:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.503 23:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.503 23:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.503 23:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.503 23:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.503 { 00:19:09.503 "cntlid": 3, 00:19:09.503 "qid": 0, 00:19:09.503 "state": "enabled", 00:19:09.503 "listen_address": { 00:19:09.503 "trtype": "TCP", 00:19:09.503 "adrfam": "IPv4", 00:19:09.503 "traddr": "10.0.0.2", 00:19:09.503 "trsvcid": "4420" 00:19:09.503 }, 00:19:09.503 "peer_address": { 00:19:09.503 "trtype": "TCP", 00:19:09.503 "adrfam": "IPv4", 00:19:09.503 "traddr": "10.0.0.1", 00:19:09.503 "trsvcid": "39338" 00:19:09.503 }, 00:19:09.503 "auth": { 00:19:09.503 "state": "completed", 00:19:09.503 "digest": "sha256", 00:19:09.503 "dhgroup": "null" 00:19:09.503 } 00:19:09.503 } 00:19:09.503 ]' 00:19:09.503 23:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.503 23:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.503 23:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.761 23:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:09.761 23:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.761 23:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.761 23:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.761 23:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.019 23:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==: 00:19:10.954 23:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.954 23:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:10.954 23:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.954 23:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.954 23:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.954 23:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.954 23:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:10.954 23:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:11.212 23:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:11.212 23:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.212 23:59:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:11.212 23:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:11.212 23:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:11.212 23:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.212 23:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.212 23:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.212 23:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.212 23:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.212 23:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.212 23:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.469 00:19:11.469 23:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.469 23:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.469 23:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.727 23:59:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.727 23:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.727 23:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.727 23:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.727 23:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.727 23:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.727 { 00:19:11.727 "cntlid": 5, 00:19:11.727 "qid": 0, 00:19:11.727 "state": "enabled", 00:19:11.727 "listen_address": { 00:19:11.727 "trtype": "TCP", 00:19:11.727 "adrfam": "IPv4", 00:19:11.727 "traddr": "10.0.0.2", 00:19:11.727 "trsvcid": "4420" 00:19:11.727 }, 00:19:11.727 "peer_address": { 00:19:11.727 "trtype": "TCP", 00:19:11.727 "adrfam": "IPv4", 00:19:11.727 "traddr": "10.0.0.1", 00:19:11.727 "trsvcid": "51030" 00:19:11.727 }, 00:19:11.727 "auth": { 00:19:11.727 "state": "completed", 00:19:11.727 "digest": "sha256", 00:19:11.727 "dhgroup": "null" 00:19:11.727 } 00:19:11.727 } 00:19:11.727 ]' 00:19:11.727 23:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.727 23:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.727 23:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.984 23:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:11.984 23:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.984 23:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.984 23:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.984 23:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.242 23:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7: 00:19:13.174 23:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.174 23:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:13.174 23:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.174 23:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.174 23:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.175 23:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.175 23:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:13.175 23:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:13.433 23:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:13.433 23:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.433 23:59:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.433 23:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:13.433 23:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:13.433 23:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.433 23:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:13.433 23:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.433 23:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.433 23:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.433 23:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.433 23:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.690 00:19:13.690 23:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.690 23:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.690 23:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.947 23:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:19:13.947 23:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.947 23:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.947 23:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.947 23:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.947 23:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.947 { 00:19:13.947 "cntlid": 7, 00:19:13.947 "qid": 0, 00:19:13.947 "state": "enabled", 00:19:13.947 "listen_address": { 00:19:13.947 "trtype": "TCP", 00:19:13.947 "adrfam": "IPv4", 00:19:13.947 "traddr": "10.0.0.2", 00:19:13.947 "trsvcid": "4420" 00:19:13.947 }, 00:19:13.947 "peer_address": { 00:19:13.947 "trtype": "TCP", 00:19:13.947 "adrfam": "IPv4", 00:19:13.947 "traddr": "10.0.0.1", 00:19:13.947 "trsvcid": "51064" 00:19:13.947 }, 00:19:13.947 "auth": { 00:19:13.947 "state": "completed", 00:19:13.947 "digest": "sha256", 00:19:13.947 "dhgroup": "null" 00:19:13.947 } 00:19:13.947 } 00:19:13.947 ]' 00:19:13.947 23:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.947 23:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.947 23:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.947 23:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:13.947 23:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.205 23:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.205 23:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.205 23:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.463 23:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=: 00:19:15.394 23:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.394 23:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:15.394 23:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.394 23:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.394 23:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.394 23:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.394 23:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.394 23:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:15.394 23:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:15.651 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:15.651 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.651 23:59:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha256 00:19:15.651 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:15.651 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:15.651 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.651 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.651 23:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.651 23:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.651 23:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.651 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.651 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.909 00:19:15.909 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.909 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.909 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.167 23:59:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.167 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.167 23:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.167 23:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.167 23:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.167 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.167 { 00:19:16.167 "cntlid": 9, 00:19:16.167 "qid": 0, 00:19:16.167 "state": "enabled", 00:19:16.167 "listen_address": { 00:19:16.167 "trtype": "TCP", 00:19:16.167 "adrfam": "IPv4", 00:19:16.167 "traddr": "10.0.0.2", 00:19:16.167 "trsvcid": "4420" 00:19:16.167 }, 00:19:16.167 "peer_address": { 00:19:16.167 "trtype": "TCP", 00:19:16.167 "adrfam": "IPv4", 00:19:16.167 "traddr": "10.0.0.1", 00:19:16.167 "trsvcid": "51086" 00:19:16.167 }, 00:19:16.167 "auth": { 00:19:16.167 "state": "completed", 00:19:16.167 "digest": "sha256", 00:19:16.167 "dhgroup": "ffdhe2048" 00:19:16.167 } 00:19:16.167 } 00:19:16.167 ]' 00:19:16.167 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.167 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.167 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.167 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:16.167 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.425 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.425 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.425 23:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.683 23:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:19:17.616 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.616 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:17.616 23:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.616 23:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.616 23:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.616 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.616 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.616 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.872 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:17.872 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:19:17.872 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.872 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:17.872 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:17.872 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.872 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.872 23:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.872 23:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.872 23:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.872 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.872 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.129 00:19:18.129 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.129 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.129 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:18.387 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.387 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.387 23:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.387 23:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.387 23:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.387 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.387 { 00:19:18.387 "cntlid": 11, 00:19:18.387 "qid": 0, 00:19:18.387 "state": "enabled", 00:19:18.387 "listen_address": { 00:19:18.387 "trtype": "TCP", 00:19:18.387 "adrfam": "IPv4", 00:19:18.387 "traddr": "10.0.0.2", 00:19:18.387 "trsvcid": "4420" 00:19:18.387 }, 00:19:18.387 "peer_address": { 00:19:18.387 "trtype": "TCP", 00:19:18.387 "adrfam": "IPv4", 00:19:18.387 "traddr": "10.0.0.1", 00:19:18.387 "trsvcid": "51116" 00:19:18.387 }, 00:19:18.387 "auth": { 00:19:18.387 "state": "completed", 00:19:18.387 "digest": "sha256", 00:19:18.387 "dhgroup": "ffdhe2048" 00:19:18.387 } 00:19:18.387 } 00:19:18.387 ]' 00:19:18.387 23:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.387 23:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.387 23:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.387 23:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:18.387 23:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.387 23:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.387 23:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:19:18.387 23:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.645 23:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==: 00:19:19.577 23:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.577 23:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:19.577 23:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.577 23:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.577 23:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.577 23:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.578 23:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:19.578 23:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:19.835 23:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:19.835 23:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:19:19.835 23:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.835 23:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:19.835 23:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:19.835 23:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.835 23:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.835 23:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.835 23:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.835 23:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.835 23:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.835 23:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.401 00:19:20.401 23:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.401 23:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.401 23:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.401 23:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.401 23:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.401 23:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.401 23:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.401 23:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.401 23:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.401 { 00:19:20.401 "cntlid": 13, 00:19:20.401 "qid": 0, 00:19:20.401 "state": "enabled", 00:19:20.401 "listen_address": { 00:19:20.401 "trtype": "TCP", 00:19:20.401 "adrfam": "IPv4", 00:19:20.401 "traddr": "10.0.0.2", 00:19:20.401 "trsvcid": "4420" 00:19:20.401 }, 00:19:20.401 "peer_address": { 00:19:20.401 "trtype": "TCP", 00:19:20.401 "adrfam": "IPv4", 00:19:20.401 "traddr": "10.0.0.1", 00:19:20.401 "trsvcid": "55376" 00:19:20.401 }, 00:19:20.401 "auth": { 00:19:20.401 "state": "completed", 00:19:20.401 "digest": "sha256", 00:19:20.401 "dhgroup": "ffdhe2048" 00:19:20.401 } 00:19:20.401 } 00:19:20.401 ]' 00:19:20.401 23:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.659 23:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.659 23:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.659 23:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:20.659 23:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.659 23:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.659 23:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:20.659 23:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.916 23:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7: 00:19:21.853 23:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.853 23:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:21.853 23:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.853 23:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.853 23:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.853 23:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.853 23:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:21.853 23:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:22.160 23:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:22.160 23:59:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.160 23:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.160 23:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:22.160 23:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:22.160 23:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.160 23:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:22.160 23:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.160 23:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.160 23:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.160 23:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.160 23:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.419 00:19:22.419 23:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.419 23:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.419 23:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:22.676 23:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.676 23:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.676 23:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.676 23:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.676 23:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.676 23:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.676 { 00:19:22.676 "cntlid": 15, 00:19:22.676 "qid": 0, 00:19:22.676 "state": "enabled", 00:19:22.676 "listen_address": { 00:19:22.676 "trtype": "TCP", 00:19:22.676 "adrfam": "IPv4", 00:19:22.676 "traddr": "10.0.0.2", 00:19:22.676 "trsvcid": "4420" 00:19:22.676 }, 00:19:22.676 "peer_address": { 00:19:22.676 "trtype": "TCP", 00:19:22.676 "adrfam": "IPv4", 00:19:22.676 "traddr": "10.0.0.1", 00:19:22.676 "trsvcid": "55402" 00:19:22.676 }, 00:19:22.676 "auth": { 00:19:22.676 "state": "completed", 00:19:22.676 "digest": "sha256", 00:19:22.676 "dhgroup": "ffdhe2048" 00:19:22.676 } 00:19:22.676 } 00:19:22.676 ]' 00:19:22.676 23:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.676 23:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.676 23:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.676 23:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:22.676 23:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.933 23:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.933 23:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:19:22.933 23:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.191 23:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=: 00:19:24.123 23:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.123 23:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:24.123 23:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.123 23:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.123 23:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.123 23:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.123 23:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.123 23:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:24.123 23:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:24.380 23:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:24.380 23:59:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.380 23:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.380 23:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:24.380 23:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:24.380 23:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.380 23:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.380 23:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.380 23:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.380 23:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.380 23:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.380 23:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.637 00:19:24.637 23:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.637 23:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.637 23:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.895 23:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.895 23:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.895 23:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.895 23:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.895 23:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.895 23:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.895 { 00:19:24.895 "cntlid": 17, 00:19:24.895 "qid": 0, 00:19:24.895 "state": "enabled", 00:19:24.895 "listen_address": { 00:19:24.895 "trtype": "TCP", 00:19:24.895 "adrfam": "IPv4", 00:19:24.895 "traddr": "10.0.0.2", 00:19:24.895 "trsvcid": "4420" 00:19:24.895 }, 00:19:24.895 "peer_address": { 00:19:24.895 "trtype": "TCP", 00:19:24.895 "adrfam": "IPv4", 00:19:24.895 "traddr": "10.0.0.1", 00:19:24.895 "trsvcid": "55426" 00:19:24.895 }, 00:19:24.895 "auth": { 00:19:24.895 "state": "completed", 00:19:24.895 "digest": "sha256", 00:19:24.895 "dhgroup": "ffdhe3072" 00:19:24.895 } 00:19:24.895 } 00:19:24.895 ]' 00:19:24.895 23:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.895 23:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.895 23:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.895 23:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:24.895 23:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.152 23:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.152 23:59:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.152 23:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.410 23:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:19:26.342 23:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.342 23:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:26.342 23:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.342 23:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.342 23:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.342 23:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.342 23:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:26.342 23:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:26.599 23:59:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:26.599 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.599 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.599 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:26.599 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:26.599 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.599 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.599 23:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.599 23:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.599 23:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.599 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.599 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.856 00:19:26.856 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.856 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.856 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.113 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.113 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.113 23:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.113 23:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.113 23:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.113 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.113 { 00:19:27.113 "cntlid": 19, 00:19:27.113 "qid": 0, 00:19:27.113 "state": "enabled", 00:19:27.113 "listen_address": { 00:19:27.113 "trtype": "TCP", 00:19:27.113 "adrfam": "IPv4", 00:19:27.113 "traddr": "10.0.0.2", 00:19:27.113 "trsvcid": "4420" 00:19:27.113 }, 00:19:27.113 "peer_address": { 00:19:27.113 "trtype": "TCP", 00:19:27.113 "adrfam": "IPv4", 00:19:27.113 "traddr": "10.0.0.1", 00:19:27.113 "trsvcid": "55462" 00:19:27.113 }, 00:19:27.113 "auth": { 00:19:27.113 "state": "completed", 00:19:27.113 "digest": "sha256", 00:19:27.113 "dhgroup": "ffdhe3072" 00:19:27.113 } 00:19:27.113 } 00:19:27.113 ]' 00:19:27.113 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.113 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.113 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.113 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:27.113 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.371 23:59:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.371 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.371 23:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.629 23:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==: 00:19:28.560 23:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.560 23:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:28.560 23:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.560 23:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.560 23:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.560 23:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.560 23:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:28.560 23:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:19:28.817 23:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:28.817 23:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.818 23:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.818 23:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:28.818 23:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.818 23:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.818 23:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.818 23:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.818 23:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.818 23:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.818 23:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.818 23:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.076 00:19:29.076 23:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.076 23:59:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.076 23:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.333 23:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.333 23:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.333 23:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.333 23:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.333 23:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.333 23:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.333 { 00:19:29.333 "cntlid": 21, 00:19:29.333 "qid": 0, 00:19:29.333 "state": "enabled", 00:19:29.333 "listen_address": { 00:19:29.333 "trtype": "TCP", 00:19:29.333 "adrfam": "IPv4", 00:19:29.333 "traddr": "10.0.0.2", 00:19:29.333 "trsvcid": "4420" 00:19:29.333 }, 00:19:29.333 "peer_address": { 00:19:29.333 "trtype": "TCP", 00:19:29.333 "adrfam": "IPv4", 00:19:29.333 "traddr": "10.0.0.1", 00:19:29.333 "trsvcid": "55488" 00:19:29.333 }, 00:19:29.333 "auth": { 00:19:29.333 "state": "completed", 00:19:29.333 "digest": "sha256", 00:19:29.333 "dhgroup": "ffdhe3072" 00:19:29.333 } 00:19:29.333 } 00:19:29.333 ]' 00:19:29.333 23:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.591 23:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.591 23:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.591 23:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:29.591 23:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.591 23:59:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.591 23:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.591 23:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.848 23:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7: 00:19:30.780 23:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.780 23:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:30.780 23:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.780 23:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.780 23:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.780 23:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.780 23:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:30.780 23:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe3072 00:19:31.038 23:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:31.038 23:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.038 23:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.038 23:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:31.038 23:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:31.038 23:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.038 23:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:31.038 23:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.038 23:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.038 23:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.038 23:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.038 23:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.604 00:19:31.604 23:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.604 23:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:19:31.604 23:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.604 23:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.604 23:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.604 23:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.604 23:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.862 23:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.862 23:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.862 { 00:19:31.862 "cntlid": 23, 00:19:31.862 "qid": 0, 00:19:31.862 "state": "enabled", 00:19:31.862 "listen_address": { 00:19:31.862 "trtype": "TCP", 00:19:31.862 "adrfam": "IPv4", 00:19:31.862 "traddr": "10.0.0.2", 00:19:31.862 "trsvcid": "4420" 00:19:31.862 }, 00:19:31.862 "peer_address": { 00:19:31.862 "trtype": "TCP", 00:19:31.862 "adrfam": "IPv4", 00:19:31.862 "traddr": "10.0.0.1", 00:19:31.862 "trsvcid": "45992" 00:19:31.862 }, 00:19:31.862 "auth": { 00:19:31.862 "state": "completed", 00:19:31.862 "digest": "sha256", 00:19:31.862 "dhgroup": "ffdhe3072" 00:19:31.862 } 00:19:31.862 } 00:19:31.862 ]' 00:19:31.862 23:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.862 23:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.862 23:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.862 23:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:31.862 23:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.862 23:59:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.862 23:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.862 23:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.119 23:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=: 00:19:33.053 23:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.054 23:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:33.054 23:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.054 23:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.054 23:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.054 23:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:33.054 23:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.054 23:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:33.054 23:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:33.311 23:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:33.311 23:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.311 23:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.311 23:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:33.311 23:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:33.311 23:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.311 23:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.311 23:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.311 23:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.311 23:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.311 23:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.311 23:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.569 00:19:33.569 23:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:19:33.569 23:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.569 23:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.827 23:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.827 23:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.827 23:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.827 23:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.827 23:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.827 23:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.827 { 00:19:33.827 "cntlid": 25, 00:19:33.827 "qid": 0, 00:19:33.827 "state": "enabled", 00:19:33.827 "listen_address": { 00:19:33.827 "trtype": "TCP", 00:19:33.827 "adrfam": "IPv4", 00:19:33.827 "traddr": "10.0.0.2", 00:19:33.827 "trsvcid": "4420" 00:19:33.827 }, 00:19:33.827 "peer_address": { 00:19:33.827 "trtype": "TCP", 00:19:33.827 "adrfam": "IPv4", 00:19:33.827 "traddr": "10.0.0.1", 00:19:33.827 "trsvcid": "46030" 00:19:33.827 }, 00:19:33.827 "auth": { 00:19:33.827 "state": "completed", 00:19:33.827 "digest": "sha256", 00:19:33.827 "dhgroup": "ffdhe4096" 00:19:33.827 } 00:19:33.827 } 00:19:33.827 ]' 00:19:33.827 23:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.084 23:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.084 23:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.084 23:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:34.084 23:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:19:34.084 23:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.084 23:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.084 23:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.342 23:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:19:35.277 23:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.277 23:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:35.277 23:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.277 23:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.277 23:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.277 23:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.277 23:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:35.277 23:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:35.536 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:35.536 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.536 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.536 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:35.536 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:35.536 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.536 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.536 23:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.536 23:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.536 23:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.536 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.536 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.103 
00:19:36.103 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:36.103 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:36.103 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:36.103 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:36.103 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:36.103 23:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:36.103 23:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:36.103 23:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:36.103 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:36.103 {
00:19:36.103 "cntlid": 27,
00:19:36.103 "qid": 0,
00:19:36.103 "state": "enabled",
00:19:36.103 "listen_address": {
00:19:36.103 "trtype": "TCP",
00:19:36.103 "adrfam": "IPv4",
00:19:36.103 "traddr": "10.0.0.2",
00:19:36.103 "trsvcid": "4420"
00:19:36.103 },
00:19:36.103 "peer_address": {
00:19:36.103 "trtype": "TCP",
00:19:36.103 "adrfam": "IPv4",
00:19:36.103 "traddr": "10.0.0.1",
00:19:36.103 "trsvcid": "46066"
00:19:36.103 },
00:19:36.103 "auth": {
00:19:36.103 "state": "completed",
00:19:36.103 "digest": "sha256",
00:19:36.103 "dhgroup": "ffdhe4096"
00:19:36.103 }
00:19:36.103 }
00:19:36.103 ]'
00:19:36.103 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:36.360 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:36.360 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:36.360 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:36.360 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:36.360 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:36.360 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:36.360 23:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:36.618 23:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==:
00:19:37.550 23:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:37.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:37.550 23:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:37.550 23:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:37.550 23:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:37.550 23:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:37.550 23:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:37.550 23:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:19:37.550 23:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:19:37.807 23:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2
00:19:37.807 23:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:37.807 23:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:19:37.807 23:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:19:37.807 23:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:37.807 23:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:37.807 23:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:37.807 23:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:37.807 23:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:37.807 23:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:37.807 23:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:37.808 23:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:38.371
00:19:38.371 23:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:38.371 23:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:38.371 23:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:38.628 23:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:38.628 23:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:38.628 23:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:38.628 23:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:38.628 23:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:38.628 23:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:38.628 {
00:19:38.628 "cntlid": 29,
00:19:38.628 "qid": 0,
00:19:38.628 "state": "enabled",
00:19:38.628 "listen_address": {
00:19:38.628 "trtype": "TCP",
00:19:38.628 "adrfam": "IPv4",
00:19:38.628 "traddr": "10.0.0.2",
00:19:38.628 "trsvcid": "4420"
00:19:38.628 },
00:19:38.628 "peer_address": {
00:19:38.628 "trtype": "TCP",
00:19:38.628 "adrfam": "IPv4",
00:19:38.628 "traddr": "10.0.0.1",
00:19:38.628 "trsvcid": "46080"
00:19:38.628 },
00:19:38.628 "auth": {
00:19:38.628 "state": "completed",
00:19:38.628 "digest": "sha256",
00:19:38.628 "dhgroup": "ffdhe4096"
00:19:38.628 }
00:19:38.628 }
00:19:38.628 ]'
00:19:38.628 23:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:38.628 23:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:38.628 23:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:38.628 23:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:38.628 23:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:38.628 23:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:38.628 23:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:38.628 23:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:38.885 23:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7:
00:19:39.844 23:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:39.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:39.844 23:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:39.844 23:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:39.844 23:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:39.844 23:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:39.844 23:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:39.844 23:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:19:39.844 23:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:19:40.102 23:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3
00:19:40.102 23:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:40.102 23:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:19:40.102 23:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:19:40.102 23:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:40.102 23:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:40.102 23:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:19:40.102 23:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:40.102 23:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:40.102 23:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:40.102 23:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:40.102 23:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:40.667
00:19:40.667 23:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:40.667 23:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:40.667 23:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:40.925 23:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:40.925 23:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:40.925 23:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:40.925 23:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:40.925 23:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:40.925 23:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:40.925 {
00:19:40.925 "cntlid": 31,
00:19:40.925 "qid": 0,
00:19:40.925 "state": "enabled",
00:19:40.925 "listen_address": {
00:19:40.925 "trtype": "TCP",
00:19:40.925 "adrfam": "IPv4",
00:19:40.925 "traddr": "10.0.0.2",
00:19:40.925 "trsvcid": "4420"
00:19:40.925 },
00:19:40.925 "peer_address": {
00:19:40.925 "trtype": "TCP",
00:19:40.925 "adrfam": "IPv4",
00:19:40.925 "traddr": "10.0.0.1",
00:19:40.925 "trsvcid": "44856"
00:19:40.925 },
00:19:40.925 "auth": {
00:19:40.925 "state": "completed",
00:19:40.925 "digest": "sha256",
00:19:40.925 "dhgroup": "ffdhe4096"
00:19:40.925 }
00:19:40.925 }
00:19:40.925 ]'
00:19:40.925 23:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:40.925 23:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:40.925 23:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:40.925 23:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:40.925 23:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:40.925 23:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:40.925 23:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:40.925 23:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:41.183 23:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=:
00:19:42.117 23:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:42.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:42.375 23:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:42.375 23:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:42.375 23:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:42.375 23:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:42.375 23:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:42.375 23:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:42.375 23:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:42.375 23:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:42.633 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0
00:19:42.633 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:42.633 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:19:42.633 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:19:42.633 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:42.633 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:42.633 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:42.633 23:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:42.633 23:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:42.633 23:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:42.633 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:42.633 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:43.199
00:19:43.199 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:43.199 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:43.199 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:43.457 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:43.457 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:43.457 23:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:43.457 23:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:43.457 23:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:43.457 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:43.457 {
00:19:43.457 "cntlid": 33,
00:19:43.457 "qid": 0,
00:19:43.457 "state": "enabled",
00:19:43.457 "listen_address": {
00:19:43.457 "trtype": "TCP",
00:19:43.457 "adrfam": "IPv4",
00:19:43.457 "traddr": "10.0.0.2",
00:19:43.457 "trsvcid": "4420"
00:19:43.457 },
00:19:43.457 "peer_address": {
00:19:43.457 "trtype": "TCP",
00:19:43.457 "adrfam": "IPv4",
00:19:43.457 "traddr": "10.0.0.1",
00:19:43.457 "trsvcid": "44892"
00:19:43.457 },
00:19:43.457 "auth": {
00:19:43.457 "state": "completed",
00:19:43.457 "digest": "sha256",
00:19:43.457 "dhgroup": "ffdhe6144"
00:19:43.457 }
00:19:43.457 }
00:19:43.457 ]'
00:19:43.457 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:43.457 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:43.457 23:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:43.457 23:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:43.457 23:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:43.457 23:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:43.457 23:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:43.457 23:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:43.715 23:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=:
00:19:44.648 23:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:44.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:44.648 23:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:44.648 23:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:44.648 23:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:44.648 23:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:44.648 23:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:44.648 23:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:44.648 23:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:44.906 23:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1
00:19:44.906 23:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:44.906 23:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:19:44.906 23:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:19:44.906 23:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:44.906 23:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:44.906 23:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:44.906 23:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:44.906 23:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:44.906 23:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:44.906 23:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:44.906 23:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:45.471
00:19:45.471 23:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:45.471 23:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:45.471 23:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:45.729 23:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:45.729 23:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:45.729 23:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:45.729 23:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:45.729 23:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:45.729 23:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:45.729 {
00:19:45.729 "cntlid": 35,
00:19:45.729 "qid": 0,
00:19:45.729 "state": "enabled",
00:19:45.729 "listen_address": {
00:19:45.729 "trtype": "TCP",
00:19:45.729 "adrfam": "IPv4",
00:19:45.729 "traddr": "10.0.0.2",
00:19:45.729 "trsvcid": "4420"
00:19:45.729 },
00:19:45.729 "peer_address": {
00:19:45.729 "trtype": "TCP",
00:19:45.729 "adrfam": "IPv4",
00:19:45.729 "traddr": "10.0.0.1",
00:19:45.729 "trsvcid": "44924"
00:19:45.729 },
00:19:45.729 "auth": {
00:19:45.729 "state": "completed",
00:19:45.729 "digest": "sha256",
00:19:45.729 "dhgroup": "ffdhe6144"
00:19:45.729 }
00:19:45.729 }
00:19:45.729 ]'
00:19:45.729 23:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:45.729 23:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:45.729 23:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:45.987 23:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:45.987 23:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:45.987 23:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:45.987 23:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:45.987 23:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:46.245 23:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==:
00:19:47.178 23:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:47.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:47.178 23:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:47.178 23:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:47.178 23:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:47.178 23:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:47.178 23:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:47.178 23:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:47.178 23:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:47.436 23:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2
00:19:47.436 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:47.436 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:19:47.436 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:19:47.436 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:47.436 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:47.436 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:47.436 23:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:47.436 23:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:47.436 23:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:47.436 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:47.436 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:48.001
00:19:48.001 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:48.001 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:48.001 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:48.259 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:48.259 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:48.259 23:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:48.259 23:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:48.259 23:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:48.259 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:48.259 {
00:19:48.259 "cntlid": 37,
00:19:48.259 "qid": 0,
00:19:48.259 "state": "enabled",
00:19:48.259 "listen_address": {
00:19:48.259 "trtype": "TCP",
00:19:48.259 "adrfam": "IPv4",
00:19:48.259 "traddr": "10.0.0.2",
00:19:48.259 "trsvcid": "4420"
00:19:48.259 },
00:19:48.259 "peer_address": {
00:19:48.259 "trtype": "TCP",
00:19:48.259 "adrfam": "IPv4",
00:19:48.259 "traddr": "10.0.0.1",
00:19:48.259 "trsvcid": "44944"
00:19:48.259 },
00:19:48.259 "auth": {
00:19:48.259 "state": "completed",
00:19:48.259 "digest": "sha256",
00:19:48.259 "dhgroup": "ffdhe6144"
00:19:48.259 }
00:19:48.259 }
00:19:48.259 ]'
00:19:48.259 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:48.259 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:48.259 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:48.259 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:48.259 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:48.259 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:48.259 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:48.259 23:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:48.517 23:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7:
00:19:49.450 23:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:49.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:49.708 23:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:49.708 23:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:49.708 23:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:49.708 23:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:49.708 23:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:49.708 23:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:49.708 23:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:49.966 23:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3
00:19:49.966 23:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:49.966 23:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:19:49.966 23:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:19:49.966 23:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:49.966 23:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:49.966 23:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:19:49.966 23:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:49.966 23:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:49.966 23:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:49.966 23:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:49.966 23:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:50.531
00:19:50.531 23:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:50.531 23:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:50.531 23:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:50.531 23:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:50.531 23:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:50.531 23:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:50.531 23:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:50.531 23:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:50.531 23:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:50.531 {
00:19:50.531 "cntlid": 39,
00:19:50.531 "qid": 0,
00:19:50.531 "state": "enabled",
00:19:50.531 "listen_address": {
00:19:50.531 "trtype": "TCP",
00:19:50.531 "adrfam": "IPv4",
00:19:50.531 "traddr": "10.0.0.2",
00:19:50.531 "trsvcid": "4420"
00:19:50.531 },
00:19:50.531 "peer_address": {
00:19:50.531 "trtype": "TCP",
00:19:50.531 "adrfam": "IPv4",
00:19:50.531 "traddr": "10.0.0.1",
00:19:50.531 "trsvcid": "49030"
00:19:50.531 },
00:19:50.531 "auth": {
00:19:50.531 "state": "completed",
00:19:50.531 "digest": "sha256",
00:19:50.531 "dhgroup": "ffdhe6144"
00:19:50.531 }
00:19:50.531 }
00:19:50.531 ]'
00:19:50.788 23:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:50.788 23:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:50.788 23:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:50.788 23:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:50.788 23:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:50.788 23:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:50.788 23:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:50.788 23:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:51.046 23:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=:
00:19:51.977 23:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:51.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:51.977 23:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:51.977 23:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:51.977 23:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:51.977 23:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:51.977 23:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:51.977 23:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
23:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:51.977 23:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:52.234 23:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:52.234 23:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.234 23:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:52.234 23:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:52.234 23:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:52.234 23:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.234 23:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.234 23:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.234 23:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.234 23:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.234 23:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.234 23:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.166 00:19:53.166 23:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.166 23:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.166 23:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.423 23:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.423 23:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.423 23:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.423 23:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.423 23:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.423 23:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.423 { 00:19:53.423 "cntlid": 41, 00:19:53.423 "qid": 0, 00:19:53.423 "state": "enabled", 00:19:53.423 "listen_address": { 00:19:53.423 "trtype": "TCP", 00:19:53.423 "adrfam": "IPv4", 00:19:53.423 "traddr": "10.0.0.2", 00:19:53.423 "trsvcid": "4420" 00:19:53.423 }, 00:19:53.423 "peer_address": { 00:19:53.423 "trtype": "TCP", 00:19:53.423 "adrfam": "IPv4", 00:19:53.423 "traddr": "10.0.0.1", 00:19:53.423 "trsvcid": "49064" 00:19:53.423 }, 00:19:53.423 "auth": { 00:19:53.423 "state": "completed", 00:19:53.423 "digest": "sha256", 00:19:53.423 "dhgroup": "ffdhe8192" 00:19:53.423 } 00:19:53.423 } 00:19:53.423 ]' 00:19:53.423 23:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.423 23:59:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.423 23:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.423 23:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:53.423 23:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.423 23:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.423 23:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.423 23:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.680 23:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.053 00:00:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.053 00:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.985 00:19:55.985 00:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.985 00:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.985 00:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.242 00:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.242 00:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.242 00:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.242 00:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.242 00:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.242 00:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.242 { 00:19:56.242 "cntlid": 43, 00:19:56.242 "qid": 0, 00:19:56.242 "state": "enabled", 00:19:56.242 "listen_address": { 00:19:56.242 "trtype": "TCP", 00:19:56.242 "adrfam": "IPv4", 00:19:56.242 "traddr": "10.0.0.2", 00:19:56.242 "trsvcid": "4420" 00:19:56.242 }, 00:19:56.242 "peer_address": { 00:19:56.242 "trtype": "TCP", 00:19:56.242 "adrfam": "IPv4", 00:19:56.242 "traddr": "10.0.0.1", 00:19:56.242 "trsvcid": "49090" 00:19:56.242 }, 00:19:56.242 "auth": { 00:19:56.242 "state": "completed", 00:19:56.242 "digest": "sha256", 00:19:56.242 "dhgroup": "ffdhe8192" 00:19:56.242 } 00:19:56.242 } 00:19:56.242 ]' 00:19:56.242 00:00:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.242 00:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.242 00:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.242 00:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.242 00:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.498 00:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.498 00:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.498 00:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.754 00:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==: 00:19:57.693 00:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.693 00:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:57.693 00:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.693 00:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.693 00:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:19:57.693 00:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.694 00:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:57.694 00:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:57.959 00:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:57.959 00:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.959 00:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:57.959 00:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:57.959 00:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:57.959 00:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.959 00:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.959 00:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.959 00:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.959 00:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.959 00:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.959 00:00:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.889 00:19:58.889 00:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.889 00:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.889 00:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.146 00:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.146 00:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.146 00:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.146 00:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.146 00:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.146 00:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.146 { 00:19:59.146 "cntlid": 45, 00:19:59.146 "qid": 0, 00:19:59.146 "state": "enabled", 00:19:59.146 "listen_address": { 00:19:59.146 "trtype": "TCP", 00:19:59.146 "adrfam": "IPv4", 00:19:59.146 "traddr": "10.0.0.2", 00:19:59.146 "trsvcid": "4420" 00:19:59.146 }, 00:19:59.146 "peer_address": { 00:19:59.146 "trtype": "TCP", 00:19:59.146 "adrfam": "IPv4", 00:19:59.146 "traddr": "10.0.0.1", 00:19:59.146 "trsvcid": "49120" 00:19:59.146 }, 00:19:59.146 "auth": { 00:19:59.146 "state": "completed", 00:19:59.146 "digest": "sha256", 00:19:59.146 "dhgroup": "ffdhe8192" 00:19:59.146 } 00:19:59.146 } 00:19:59.146 ]' 
00:19:59.146 00:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.146 00:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.146 00:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.146 00:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.146 00:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.146 00:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.146 00:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.146 00:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.710 00:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7: 00:20:00.641 00:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.641 00:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:00.641 00:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.641 00:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.641 00:00:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.641 00:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.641 00:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.641 00:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.898 00:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:00.898 00:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.898 00:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:00.898 00:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:00.898 00:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:00.898 00:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.898 00:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:00.898 00:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.898 00:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.898 00:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.898 00:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.898 00:00:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.828 00:20:01.828 00:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.828 00:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.828 00:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.828 00:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.828 00:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.828 00:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.828 00:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.828 00:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.828 00:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.828 { 00:20:01.828 "cntlid": 47, 00:20:01.828 "qid": 0, 00:20:01.829 "state": "enabled", 00:20:01.829 "listen_address": { 00:20:01.829 "trtype": "TCP", 00:20:01.829 "adrfam": "IPv4", 00:20:01.829 "traddr": "10.0.0.2", 00:20:01.829 "trsvcid": "4420" 00:20:01.829 }, 00:20:01.829 "peer_address": { 00:20:01.829 "trtype": "TCP", 00:20:01.829 "adrfam": "IPv4", 00:20:01.829 "traddr": "10.0.0.1", 00:20:01.829 "trsvcid": "58816" 00:20:01.829 }, 00:20:01.829 "auth": { 00:20:01.829 "state": "completed", 00:20:01.829 "digest": "sha256", 00:20:01.829 "dhgroup": "ffdhe8192" 00:20:01.829 } 00:20:01.829 } 00:20:01.829 ]' 00:20:01.829 00:00:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.086 00:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.086 00:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.086 00:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.086 00:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.086 00:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.086 00:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.086 00:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.342 00:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=: 00:20:03.273 00:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.273 00:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:03.273 00:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.273 00:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.273 00:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.274 
00:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:03.274 00:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.274 00:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.274 00:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:03.274 00:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:03.534 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:03.534 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.534 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.534 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:03.534 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:03.534 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.534 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.534 00:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.534 00:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.534 00:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.534 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.534 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.796 00:20:03.796 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.796 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.796 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.053 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.053 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.053 00:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.053 00:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.053 00:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.053 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.053 { 00:20:04.053 "cntlid": 49, 00:20:04.053 "qid": 0, 00:20:04.053 "state": "enabled", 00:20:04.053 "listen_address": { 00:20:04.053 "trtype": "TCP", 00:20:04.053 "adrfam": "IPv4", 00:20:04.053 "traddr": "10.0.0.2", 00:20:04.053 "trsvcid": "4420" 00:20:04.053 }, 00:20:04.053 "peer_address": { 00:20:04.053 "trtype": "TCP", 00:20:04.053 "adrfam": "IPv4", 00:20:04.053 "traddr": "10.0.0.1", 00:20:04.053 "trsvcid": "58838" 00:20:04.053 }, 00:20:04.053 "auth": 
{ 00:20:04.053 "state": "completed", 00:20:04.053 "digest": "sha384", 00:20:04.053 "dhgroup": "null" 00:20:04.053 } 00:20:04.053 } 00:20:04.053 ]' 00:20:04.053 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.053 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.053 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.310 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:04.310 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.310 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.310 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.310 00:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.567 00:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:20:05.498 00:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.498 00:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:05.498 00:00:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.498 00:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.498 00:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.498 00:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.498 00:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:05.498 00:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:05.755 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:05.755 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.755 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:05.755 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:05.755 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:05.755 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.755 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.755 00:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.755 00:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.755 00:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.755 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.755 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.012 00:20:06.012 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.012 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.012 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.269 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.269 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.269 00:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.269 00:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.269 00:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.269 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.269 { 00:20:06.269 "cntlid": 51, 00:20:06.269 "qid": 0, 00:20:06.269 "state": "enabled", 00:20:06.269 "listen_address": { 00:20:06.269 "trtype": "TCP", 00:20:06.269 "adrfam": "IPv4", 00:20:06.269 "traddr": "10.0.0.2", 00:20:06.269 "trsvcid": "4420" 00:20:06.269 }, 00:20:06.269 "peer_address": { 00:20:06.269 "trtype": "TCP", 00:20:06.269 "adrfam": "IPv4", 00:20:06.269 "traddr": "10.0.0.1", 00:20:06.269 "trsvcid": "58852" 00:20:06.269 }, 
00:20:06.269 "auth": { 00:20:06.269 "state": "completed", 00:20:06.269 "digest": "sha384", 00:20:06.269 "dhgroup": "null" 00:20:06.269 } 00:20:06.269 } 00:20:06.269 ]' 00:20:06.269 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.269 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.269 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.269 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:06.269 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.269 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.269 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.269 00:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.527 00:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==: 00:20:07.460 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.460 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:07.460 00:00:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.460 00:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.460 00:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.460 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.460 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:07.460 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:07.718 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:07.718 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.718 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:07.718 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:07.718 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:07.718 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.718 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.718 00:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.718 00:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.718 00:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.718 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.718 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.975 00:20:08.232 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.232 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.232 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.232 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.232 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.232 00:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.232 00:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.232 00:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.232 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.232 { 00:20:08.232 "cntlid": 53, 00:20:08.232 "qid": 0, 00:20:08.232 "state": "enabled", 00:20:08.232 "listen_address": { 00:20:08.232 "trtype": "TCP", 00:20:08.232 "adrfam": "IPv4", 00:20:08.232 "traddr": "10.0.0.2", 00:20:08.232 "trsvcid": "4420" 00:20:08.232 }, 00:20:08.232 "peer_address": { 00:20:08.232 "trtype": "TCP", 00:20:08.232 "adrfam": "IPv4", 00:20:08.232 "traddr": "10.0.0.1", 00:20:08.232 "trsvcid": "58870" 00:20:08.232 }, 
00:20:08.232 "auth": { 00:20:08.232 "state": "completed", 00:20:08.232 "digest": "sha384", 00:20:08.232 "dhgroup": "null" 00:20:08.232 } 00:20:08.232 } 00:20:08.232 ]' 00:20:08.232 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.489 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.489 00:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.489 00:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:08.489 00:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.489 00:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.489 00:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.489 00:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.746 00:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7: 00:20:09.678 00:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.678 00:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:09.678 00:00:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.678 00:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.678 00:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.678 00:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.678 00:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.678 00:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.935 00:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:09.935 00:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.935 00:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.935 00:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:09.935 00:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:09.936 00:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.936 00:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:09.936 00:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.936 00:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.936 00:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.936 00:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.936 00:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.193 00:20:10.193 00:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.193 00:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.193 00:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.451 00:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.451 00:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.451 00:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.451 00:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.451 00:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.451 00:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.451 { 00:20:10.451 "cntlid": 55, 00:20:10.451 "qid": 0, 00:20:10.451 "state": "enabled", 00:20:10.451 "listen_address": { 00:20:10.451 "trtype": "TCP", 00:20:10.451 "adrfam": "IPv4", 00:20:10.451 "traddr": "10.0.0.2", 00:20:10.451 "trsvcid": "4420" 00:20:10.451 }, 00:20:10.451 "peer_address": { 00:20:10.451 "trtype": "TCP", 00:20:10.451 "adrfam": "IPv4", 00:20:10.451 "traddr": "10.0.0.1", 00:20:10.451 "trsvcid": "39522" 00:20:10.451 }, 00:20:10.451 "auth": { 00:20:10.451 "state": "completed", 00:20:10.451 
"digest": "sha384", 00:20:10.451 "dhgroup": "null" 00:20:10.451 } 00:20:10.451 } 00:20:10.451 ]' 00:20:10.451 00:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.707 00:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.707 00:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.707 00:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:10.707 00:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.707 00:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.707 00:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.707 00:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.963 00:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=: 00:20:11.895 00:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.895 00:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:11.895 00:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.895 00:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.895 
00:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.895 00:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.895 00:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.895 00:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:11.895 00:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:12.152 00:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:12.152 00:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.152 00:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:12.152 00:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:12.152 00:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:12.152 00:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.152 00:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.153 00:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.153 00:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.153 00:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.153 00:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.153 00:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.412 00:20:12.412 00:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.412 00:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.412 00:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.670 00:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.670 00:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.670 00:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.670 00:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.670 00:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.670 00:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.670 { 00:20:12.670 "cntlid": 57, 00:20:12.670 "qid": 0, 00:20:12.670 "state": "enabled", 00:20:12.670 "listen_address": { 00:20:12.670 "trtype": "TCP", 00:20:12.670 "adrfam": "IPv4", 00:20:12.670 "traddr": "10.0.0.2", 00:20:12.670 "trsvcid": "4420" 00:20:12.670 }, 00:20:12.670 "peer_address": { 00:20:12.670 "trtype": "TCP", 00:20:12.670 "adrfam": "IPv4", 00:20:12.670 "traddr": "10.0.0.1", 00:20:12.670 "trsvcid": "39554" 00:20:12.670 }, 00:20:12.670 "auth": 
{ 00:20:12.670 "state": "completed", 00:20:12.670 "digest": "sha384", 00:20:12.670 "dhgroup": "ffdhe2048" 00:20:12.670 } 00:20:12.670 } 00:20:12.670 ]' 00:20:12.670 00:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.670 00:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.670 00:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.670 00:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:12.670 00:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.927 00:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.927 00:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.927 00:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.185 00:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:20:14.116 00:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.117 00:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:14.117 00:00:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.117 00:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.117 00:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.117 00:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.117 00:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.117 00:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.375 00:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:14.375 00:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.375 00:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:14.375 00:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:14.375 00:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:14.375 00:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.375 00:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.375 00:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.375 00:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.375 00:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.375 00:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.375 00:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.632 00:20:14.632 00:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.632 00:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.632 00:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.890 00:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.890 00:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.890 00:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.890 00:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.890 00:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.890 00:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.890 { 00:20:14.890 "cntlid": 59, 00:20:14.890 "qid": 0, 00:20:14.890 "state": "enabled", 00:20:14.890 "listen_address": { 00:20:14.890 "trtype": "TCP", 00:20:14.890 "adrfam": "IPv4", 00:20:14.890 "traddr": "10.0.0.2", 00:20:14.890 "trsvcid": "4420" 00:20:14.890 }, 00:20:14.890 "peer_address": { 00:20:14.890 "trtype": "TCP", 00:20:14.890 "adrfam": "IPv4", 00:20:14.890 "traddr": 
"10.0.0.1", 00:20:14.890 "trsvcid": "39580" 00:20:14.890 }, 00:20:14.890 "auth": { 00:20:14.890 "state": "completed", 00:20:14.890 "digest": "sha384", 00:20:14.890 "dhgroup": "ffdhe2048" 00:20:14.890 } 00:20:14.890 } 00:20:14.890 ]' 00:20:14.890 00:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.890 00:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.890 00:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.150 00:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:15.150 00:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.150 00:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.150 00:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.150 00:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.409 00:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==: 00:20:16.341 00:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.341 00:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:16.341 00:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.341 00:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.341 00:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.341 00:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.341 00:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.341 00:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.598 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:16.598 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.598 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:16.598 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:16.598 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:16.598 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.598 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.598 00:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.598 00:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.598 00:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:20:16.598 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.598 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.856 00:20:16.856 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.856 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.856 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.124 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.124 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.124 00:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.124 00:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.124 00:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.124 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.124 { 00:20:17.124 "cntlid": 61, 00:20:17.124 "qid": 0, 00:20:17.124 "state": "enabled", 00:20:17.124 "listen_address": { 00:20:17.124 "trtype": "TCP", 00:20:17.124 "adrfam": "IPv4", 00:20:17.124 "traddr": "10.0.0.2", 00:20:17.124 "trsvcid": "4420" 00:20:17.124 }, 00:20:17.124 "peer_address": { 
00:20:17.124 "trtype": "TCP", 00:20:17.124 "adrfam": "IPv4", 00:20:17.124 "traddr": "10.0.0.1", 00:20:17.124 "trsvcid": "39614" 00:20:17.124 }, 00:20:17.124 "auth": { 00:20:17.124 "state": "completed", 00:20:17.124 "digest": "sha384", 00:20:17.124 "dhgroup": "ffdhe2048" 00:20:17.124 } 00:20:17.124 } 00:20:17.124 ]' 00:20:17.124 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.383 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.383 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.383 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:17.383 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.383 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.383 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.383 00:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.640 00:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7: 00:20:18.571 00:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.571 00:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:18.571 00:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.571 00:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.571 00:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.571 00:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.571 00:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.571 00:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.829 00:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:18.829 00:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.829 00:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.829 00:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:18.829 00:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:18.829 00:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.829 00:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:18.829 00:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.829 00:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.829 00:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:20:18.829 00:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:18.829 00:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:19.086 00:20:19.086 00:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.086 00:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.086 00:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.342 00:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.598 00:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.598 00:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.598 00:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.598 00:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.598 00:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.598 { 00:20:19.598 "cntlid": 63, 00:20:19.598 "qid": 0, 00:20:19.598 "state": "enabled", 00:20:19.598 "listen_address": { 00:20:19.598 "trtype": "TCP", 00:20:19.598 "adrfam": "IPv4", 00:20:19.598 "traddr": "10.0.0.2", 00:20:19.598 "trsvcid": "4420" 00:20:19.598 }, 00:20:19.598 "peer_address": { 00:20:19.598 "trtype": "TCP", 00:20:19.598 "adrfam": 
"IPv4", 00:20:19.598 "traddr": "10.0.0.1", 00:20:19.598 "trsvcid": "39642" 00:20:19.598 }, 00:20:19.598 "auth": { 00:20:19.598 "state": "completed", 00:20:19.598 "digest": "sha384", 00:20:19.598 "dhgroup": "ffdhe2048" 00:20:19.598 } 00:20:19.598 } 00:20:19.598 ]' 00:20:19.598 00:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.598 00:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.598 00:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.598 00:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:19.598 00:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.598 00:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.598 00:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.598 00:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.855 00:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=: 00:20:20.788 00:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.788 00:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:20.788 00:00:26 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.788 00:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.788 00:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.788 00:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.788 00:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.788 00:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:20.788 00:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:21.045 00:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:21.045 00:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.045 00:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:21.045 00:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:21.045 00:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:21.045 00:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.045 00:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.045 00:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.045 00:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.045 00:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:20:21.045 00:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.045 00:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.610 00:20:21.610 00:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.610 00:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.610 00:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.867 00:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.867 00:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.867 00:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.867 00:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.867 00:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.867 00:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.867 { 00:20:21.867 "cntlid": 65, 00:20:21.867 "qid": 0, 00:20:21.867 "state": "enabled", 00:20:21.867 "listen_address": { 00:20:21.867 "trtype": "TCP", 00:20:21.867 "adrfam": "IPv4", 00:20:21.867 "traddr": "10.0.0.2", 00:20:21.867 "trsvcid": "4420" 00:20:21.867 }, 00:20:21.867 
"peer_address": { 00:20:21.867 "trtype": "TCP", 00:20:21.867 "adrfam": "IPv4", 00:20:21.867 "traddr": "10.0.0.1", 00:20:21.867 "trsvcid": "35854" 00:20:21.867 }, 00:20:21.867 "auth": { 00:20:21.867 "state": "completed", 00:20:21.867 "digest": "sha384", 00:20:21.867 "dhgroup": "ffdhe3072" 00:20:21.868 } 00:20:21.868 } 00:20:21.868 ]' 00:20:21.868 00:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.868 00:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.868 00:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.868 00:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:21.868 00:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.868 00:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.868 00:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.868 00:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.125 00:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:20:23.057 00:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.057 00:00:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:23.057 00:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.057 00:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.057 00:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.057 00:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.057 00:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:23.057 00:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:23.315 00:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:23.315 00:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.315 00:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.315 00:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:23.315 00:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:23.315 00:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.315 00:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.315 00:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.315 00:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.315 
00:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.315 00:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.315 00:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.880 00:20:23.880 00:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.880 00:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.880 00:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.138 00:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.138 00:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.138 00:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.138 00:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.138 00:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.138 00:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.138 { 00:20:24.138 "cntlid": 67, 00:20:24.138 "qid": 0, 00:20:24.138 "state": "enabled", 00:20:24.138 "listen_address": { 00:20:24.138 "trtype": "TCP", 00:20:24.138 "adrfam": "IPv4", 00:20:24.138 "traddr": "10.0.0.2", 
00:20:24.138 "trsvcid": "4420" 00:20:24.138 }, 00:20:24.138 "peer_address": { 00:20:24.138 "trtype": "TCP", 00:20:24.138 "adrfam": "IPv4", 00:20:24.138 "traddr": "10.0.0.1", 00:20:24.138 "trsvcid": "35876" 00:20:24.138 }, 00:20:24.138 "auth": { 00:20:24.138 "state": "completed", 00:20:24.138 "digest": "sha384", 00:20:24.138 "dhgroup": "ffdhe3072" 00:20:24.138 } 00:20:24.138 } 00:20:24.138 ]' 00:20:24.138 00:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.138 00:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.138 00:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.138 00:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:24.138 00:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.138 00:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.138 00:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.138 00:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.395 00:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==: 00:20:25.327 00:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.327 00:00:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:25.327 00:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.327 00:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.327 00:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.327 00:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.327 00:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.327 00:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.585 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:25.585 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.585 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.585 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:25.585 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:25.585 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.585 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.585 00:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.585 00:00:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:25.585 00:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.585 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.585 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.842 00:20:25.842 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.842 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.842 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.408 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.408 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.408 00:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.408 00:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.408 00:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.408 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.408 { 00:20:26.408 "cntlid": 69, 00:20:26.408 "qid": 0, 00:20:26.408 "state": "enabled", 00:20:26.408 "listen_address": { 00:20:26.408 "trtype": "TCP", 
00:20:26.408 "adrfam": "IPv4", 00:20:26.408 "traddr": "10.0.0.2", 00:20:26.408 "trsvcid": "4420" 00:20:26.408 }, 00:20:26.408 "peer_address": { 00:20:26.408 "trtype": "TCP", 00:20:26.408 "adrfam": "IPv4", 00:20:26.408 "traddr": "10.0.0.1", 00:20:26.408 "trsvcid": "35896" 00:20:26.408 }, 00:20:26.408 "auth": { 00:20:26.408 "state": "completed", 00:20:26.408 "digest": "sha384", 00:20:26.408 "dhgroup": "ffdhe3072" 00:20:26.408 } 00:20:26.408 } 00:20:26.408 ]' 00:20:26.408 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.408 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.408 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.408 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:26.408 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.408 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.408 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.408 00:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.666 00:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7: 00:20:27.598 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.598 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:27.598 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:27.598 00:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.598 00:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.598 00:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.598 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.598 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.598 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.857 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:27.857 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.857 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.857 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:27.857 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:27.857 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.857 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:27.857 00:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.857 00:00:33 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:27.857 00:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.857 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.857 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.118 00:20:28.118 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.118 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.118 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.375 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.375 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.375 00:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.375 00:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.375 00:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.375 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.375 { 00:20:28.375 "cntlid": 71, 00:20:28.375 "qid": 0, 00:20:28.375 "state": "enabled", 00:20:28.375 "listen_address": { 00:20:28.375 "trtype": "TCP", 00:20:28.375 "adrfam": "IPv4", 00:20:28.375 "traddr": 
"10.0.0.2", 00:20:28.375 "trsvcid": "4420" 00:20:28.375 }, 00:20:28.375 "peer_address": { 00:20:28.375 "trtype": "TCP", 00:20:28.375 "adrfam": "IPv4", 00:20:28.375 "traddr": "10.0.0.1", 00:20:28.375 "trsvcid": "35928" 00:20:28.375 }, 00:20:28.375 "auth": { 00:20:28.375 "state": "completed", 00:20:28.375 "digest": "sha384", 00:20:28.375 "dhgroup": "ffdhe3072" 00:20:28.375 } 00:20:28.375 } 00:20:28.375 ]' 00:20:28.375 00:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.375 00:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.375 00:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.375 00:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:28.376 00:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.633 00:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.633 00:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.633 00:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.633 00:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=: 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.005 00:00:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.005 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.262 00:20:30.262 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.262 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.262 00:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.520 00:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.520 00:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.520 00:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.520 00:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.520 00:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.520 00:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.520 { 00:20:30.520 "cntlid": 73, 00:20:30.520 "qid": 0, 00:20:30.520 "state": "enabled", 00:20:30.520 "listen_address": { 00:20:30.520 
"trtype": "TCP", 00:20:30.520 "adrfam": "IPv4", 00:20:30.520 "traddr": "10.0.0.2", 00:20:30.520 "trsvcid": "4420" 00:20:30.520 }, 00:20:30.520 "peer_address": { 00:20:30.520 "trtype": "TCP", 00:20:30.520 "adrfam": "IPv4", 00:20:30.520 "traddr": "10.0.0.1", 00:20:30.520 "trsvcid": "32980" 00:20:30.520 }, 00:20:30.520 "auth": { 00:20:30.520 "state": "completed", 00:20:30.520 "digest": "sha384", 00:20:30.520 "dhgroup": "ffdhe4096" 00:20:30.520 } 00:20:30.520 } 00:20:30.520 ]' 00:20:30.520 00:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.777 00:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.777 00:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.777 00:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:30.777 00:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.777 00:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.777 00:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.777 00:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.034 00:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:20:31.966 00:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:31.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.966 00:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:31.966 00:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.966 00:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.966 00:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.967 00:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.967 00:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.967 00:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:32.223 00:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:32.223 00:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.223 00:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.223 00:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:32.223 00:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:32.223 00:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.223 00:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.223 00:00:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.223 00:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.223 00:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.223 00:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.223 00:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.790 00:20:32.790 00:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.790 00:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.790 00:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.083 00:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.083 00:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.083 00:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.083 00:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.083 00:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.083 00:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.083 { 00:20:33.083 "cntlid": 75, 00:20:33.083 "qid": 0, 
00:20:33.083 "state": "enabled", 00:20:33.083 "listen_address": { 00:20:33.083 "trtype": "TCP", 00:20:33.083 "adrfam": "IPv4", 00:20:33.083 "traddr": "10.0.0.2", 00:20:33.083 "trsvcid": "4420" 00:20:33.083 }, 00:20:33.083 "peer_address": { 00:20:33.083 "trtype": "TCP", 00:20:33.083 "adrfam": "IPv4", 00:20:33.083 "traddr": "10.0.0.1", 00:20:33.083 "trsvcid": "33012" 00:20:33.083 }, 00:20:33.083 "auth": { 00:20:33.083 "state": "completed", 00:20:33.083 "digest": "sha384", 00:20:33.083 "dhgroup": "ffdhe4096" 00:20:33.083 } 00:20:33.083 } 00:20:33.083 ]' 00:20:33.083 00:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.083 00:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.083 00:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.083 00:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.083 00:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.083 00:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.083 00:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.083 00:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.340 00:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==: 00:20:34.269 00:00:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.269 00:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:34.269 00:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.269 00:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.269 00:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.269 00:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.269 00:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.269 00:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.526 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:34.526 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.526 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.526 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:34.526 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:34.526 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.526 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.526 
00:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.526 00:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.526 00:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.526 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.526 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.089 00:20:35.089 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.089 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.089 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.345 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.345 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.345 00:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.345 00:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.345 00:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.345 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.345 { 00:20:35.345 
"cntlid": 77, 00:20:35.345 "qid": 0, 00:20:35.345 "state": "enabled", 00:20:35.345 "listen_address": { 00:20:35.345 "trtype": "TCP", 00:20:35.345 "adrfam": "IPv4", 00:20:35.345 "traddr": "10.0.0.2", 00:20:35.345 "trsvcid": "4420" 00:20:35.345 }, 00:20:35.345 "peer_address": { 00:20:35.345 "trtype": "TCP", 00:20:35.345 "adrfam": "IPv4", 00:20:35.345 "traddr": "10.0.0.1", 00:20:35.345 "trsvcid": "33034" 00:20:35.345 }, 00:20:35.345 "auth": { 00:20:35.345 "state": "completed", 00:20:35.345 "digest": "sha384", 00:20:35.345 "dhgroup": "ffdhe4096" 00:20:35.345 } 00:20:35.345 } 00:20:35.345 ]' 00:20:35.345 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.345 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.345 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.345 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.345 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.345 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.345 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.345 00:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.603 00:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7: 00:20:36.972 00:00:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.972 00:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:36.972 00:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.972 00:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.972 00:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.972 00:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.972 00:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.972 00:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.972 00:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:36.972 00:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.972 00:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.973 00:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:36.973 00:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:36.973 00:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.973 00:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 
00:20:36.973 00:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.973 00:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.973 00:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.973 00:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.973 00:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.229 00:20:37.229 00:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.229 00:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.229 00:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.486 00:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.486 00:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.486 00:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.486 00:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.486 00:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.486 00:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.486 { 00:20:37.486 "cntlid": 79, 00:20:37.486 "qid": 0, 
00:20:37.486 "state": "enabled", 00:20:37.486 "listen_address": { 00:20:37.486 "trtype": "TCP", 00:20:37.486 "adrfam": "IPv4", 00:20:37.486 "traddr": "10.0.0.2", 00:20:37.486 "trsvcid": "4420" 00:20:37.486 }, 00:20:37.486 "peer_address": { 00:20:37.486 "trtype": "TCP", 00:20:37.486 "adrfam": "IPv4", 00:20:37.486 "traddr": "10.0.0.1", 00:20:37.486 "trsvcid": "33052" 00:20:37.486 }, 00:20:37.486 "auth": { 00:20:37.486 "state": "completed", 00:20:37.486 "digest": "sha384", 00:20:37.486 "dhgroup": "ffdhe4096" 00:20:37.486 } 00:20:37.486 } 00:20:37.486 ]' 00:20:37.486 00:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.743 00:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.743 00:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.743 00:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.743 00:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.743 00:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.743 00:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.743 00:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.001 00:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=: 00:20:38.933 00:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:20:38.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.933 00:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:38.933 00:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.933 00:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.933 00:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.933 00:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.933 00:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.933 00:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.933 00:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:39.191 00:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:39.191 00:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.191 00:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:39.191 00:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:39.191 00:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:39.191 00:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.191 00:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:39.191 00:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.191 00:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.191 00:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.191 00:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.191 00:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.755 00:20:39.755 00:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.755 00:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.755 00:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.012 00:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.012 00:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.012 00:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.012 00:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.012 00:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.012 00:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:40.012 { 00:20:40.012 "cntlid": 81, 00:20:40.012 "qid": 0, 00:20:40.012 "state": "enabled", 00:20:40.012 "listen_address": { 00:20:40.012 "trtype": "TCP", 00:20:40.012 "adrfam": "IPv4", 00:20:40.012 "traddr": "10.0.0.2", 00:20:40.012 "trsvcid": "4420" 00:20:40.012 }, 00:20:40.012 "peer_address": { 00:20:40.012 "trtype": "TCP", 00:20:40.012 "adrfam": "IPv4", 00:20:40.012 "traddr": "10.0.0.1", 00:20:40.012 "trsvcid": "33068" 00:20:40.012 }, 00:20:40.012 "auth": { 00:20:40.012 "state": "completed", 00:20:40.012 "digest": "sha384", 00:20:40.012 "dhgroup": "ffdhe6144" 00:20:40.012 } 00:20:40.012 } 00:20:40.012 ]' 00:20:40.012 00:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.012 00:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.012 00:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.269 00:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:40.269 00:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.269 00:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.269 00:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.269 00:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.525 00:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret 
DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:20:41.457 00:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.457 00:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:41.457 00:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.457 00:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.457 00:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.457 00:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.457 00:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.457 00:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.714 00:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:41.714 00:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.714 00:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:41.714 00:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:41.714 00:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:41.714 00:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.714 00:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.714 00:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.714 00:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.714 00:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.714 00:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.714 00:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.280 00:20:42.280 00:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.280 00:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.280 00:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.537 00:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.537 00:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.537 00:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.537 00:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.537 00:00:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.537 00:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.537 { 00:20:42.537 "cntlid": 83, 00:20:42.537 "qid": 0, 00:20:42.537 "state": "enabled", 00:20:42.538 "listen_address": { 00:20:42.538 "trtype": "TCP", 00:20:42.538 "adrfam": "IPv4", 00:20:42.538 "traddr": "10.0.0.2", 00:20:42.538 "trsvcid": "4420" 00:20:42.538 }, 00:20:42.538 "peer_address": { 00:20:42.538 "trtype": "TCP", 00:20:42.538 "adrfam": "IPv4", 00:20:42.538 "traddr": "10.0.0.1", 00:20:42.538 "trsvcid": "36644" 00:20:42.538 }, 00:20:42.538 "auth": { 00:20:42.538 "state": "completed", 00:20:42.538 "digest": "sha384", 00:20:42.538 "dhgroup": "ffdhe6144" 00:20:42.538 } 00:20:42.538 } 00:20:42.538 ]' 00:20:42.538 00:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.538 00:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.538 00:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.538 00:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.538 00:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.538 00:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.538 00:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.538 00:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.795 00:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==: 00:20:43.726 00:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.982 00:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:43.982 00:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.982 00:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.982 00:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.982 00:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.982 00:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.982 00:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.238 00:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:44.238 00:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.238 00:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:44.238 00:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:44.238 00:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:44.238 00:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.238 00:00:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.238 00:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.238 00:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.238 00:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.238 00:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.238 00:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.801 00:20:44.801 00:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.801 00:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.801 00:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.801 00:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.801 00:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.801 00:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.801 00:00:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:44.801 00:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.801 00:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.801 { 00:20:44.801 "cntlid": 85, 00:20:44.801 "qid": 0, 00:20:44.801 "state": "enabled", 00:20:44.801 "listen_address": { 00:20:44.801 "trtype": "TCP", 00:20:44.801 "adrfam": "IPv4", 00:20:44.801 "traddr": "10.0.0.2", 00:20:44.801 "trsvcid": "4420" 00:20:44.801 }, 00:20:44.801 "peer_address": { 00:20:44.801 "trtype": "TCP", 00:20:44.801 "adrfam": "IPv4", 00:20:44.801 "traddr": "10.0.0.1", 00:20:44.801 "trsvcid": "36676" 00:20:44.801 }, 00:20:44.801 "auth": { 00:20:44.801 "state": "completed", 00:20:44.801 "digest": "sha384", 00:20:44.801 "dhgroup": "ffdhe6144" 00:20:44.801 } 00:20:44.801 } 00:20:44.801 ]' 00:20:44.801 00:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.058 00:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.058 00:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.058 00:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.058 00:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.058 00:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.058 00:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.058 00:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.316 00:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7: 00:20:46.246 00:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.246 00:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.246 00:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.246 00:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.246 00:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.246 00:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.246 00:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.246 00:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.502 00:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:46.502 00:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.502 00:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:46.502 00:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:46.502 00:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:46.502 00:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.502 00:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:46.502 00:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.502 00:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.502 00:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.502 00:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.502 00:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.065 00:20:47.322 00:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.322 00:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.322 00:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.580 00:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.580 00:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.580 00:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.580 00:00:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:47.580 00:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.580 00:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.580 { 00:20:47.580 "cntlid": 87, 00:20:47.580 "qid": 0, 00:20:47.580 "state": "enabled", 00:20:47.580 "listen_address": { 00:20:47.580 "trtype": "TCP", 00:20:47.580 "adrfam": "IPv4", 00:20:47.580 "traddr": "10.0.0.2", 00:20:47.580 "trsvcid": "4420" 00:20:47.580 }, 00:20:47.580 "peer_address": { 00:20:47.580 "trtype": "TCP", 00:20:47.580 "adrfam": "IPv4", 00:20:47.580 "traddr": "10.0.0.1", 00:20:47.580 "trsvcid": "36708" 00:20:47.580 }, 00:20:47.580 "auth": { 00:20:47.580 "state": "completed", 00:20:47.580 "digest": "sha384", 00:20:47.580 "dhgroup": "ffdhe6144" 00:20:47.580 } 00:20:47.580 } 00:20:47.580 ]' 00:20:47.580 00:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.580 00:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.580 00:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.580 00:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.580 00:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.580 00:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.580 00:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.580 00:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.836 00:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=: 00:20:48.767 00:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.767 00:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:48.767 00:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.767 00:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.767 00:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.767 00:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.767 00:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.767 00:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:48.767 00:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:49.025 00:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:49.025 00:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.025 00:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:49.025 00:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:49.025 00:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:49.025 00:00:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.025 00:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.025 00:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.025 00:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.025 00:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.025 00:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.025 00:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.956 00:20:49.956 00:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.956 00:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.956 00:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.213 00:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.213 00:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.213 00:00:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.213 00:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.214 00:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.214 00:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.214 { 00:20:50.214 "cntlid": 89, 00:20:50.214 "qid": 0, 00:20:50.214 "state": "enabled", 00:20:50.214 "listen_address": { 00:20:50.214 "trtype": "TCP", 00:20:50.214 "adrfam": "IPv4", 00:20:50.214 "traddr": "10.0.0.2", 00:20:50.214 "trsvcid": "4420" 00:20:50.214 }, 00:20:50.214 "peer_address": { 00:20:50.214 "trtype": "TCP", 00:20:50.214 "adrfam": "IPv4", 00:20:50.214 "traddr": "10.0.0.1", 00:20:50.214 "trsvcid": "36738" 00:20:50.214 }, 00:20:50.214 "auth": { 00:20:50.214 "state": "completed", 00:20:50.214 "digest": "sha384", 00:20:50.214 "dhgroup": "ffdhe8192" 00:20:50.214 } 00:20:50.214 } 00:20:50.214 ]' 00:20:50.214 00:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.214 00:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.214 00:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.214 00:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:50.214 00:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.471 00:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.471 00:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.471 00:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.737 00:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:20:51.728 00:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:51.729 00:00:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.729 00:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.661 00:20:52.661 00:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.661 00:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.661 00:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.918 00:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.918 00:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.918 00:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.918 00:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.918 00:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.918 00:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.918 { 00:20:52.918 "cntlid": 91, 00:20:52.918 "qid": 0, 00:20:52.918 "state": "enabled", 00:20:52.918 "listen_address": { 00:20:52.918 "trtype": "TCP", 00:20:52.918 "adrfam": "IPv4", 00:20:52.918 "traddr": "10.0.0.2", 00:20:52.918 "trsvcid": "4420" 00:20:52.918 }, 00:20:52.918 "peer_address": { 00:20:52.918 "trtype": "TCP", 00:20:52.918 "adrfam": "IPv4", 00:20:52.918 "traddr": "10.0.0.1", 00:20:52.918 "trsvcid": "42556" 00:20:52.918 }, 00:20:52.918 "auth": { 00:20:52.918 "state": "completed", 00:20:52.918 "digest": "sha384", 00:20:52.918 "dhgroup": "ffdhe8192" 00:20:52.918 } 00:20:52.918 } 00:20:52.918 ]' 00:20:52.918 00:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.918 00:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.918 00:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.175 00:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.175 00:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.175 00:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.175 00:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.175 00:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.432 
00:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==: 00:20:54.365 00:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.365 00:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:54.365 00:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.365 00:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.365 00:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.365 00:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.365 00:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.365 00:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.623 00:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:54.623 00:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.623 00:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:54.623 00:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:20:54.623 00:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:54.623 00:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.623 00:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.623 00:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.623 00:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.623 00:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.623 00:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.623 00:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.554 00:20:55.554 00:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.554 00:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.554 00:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.812 00:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.812 00:01:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.812 00:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.812 00:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.812 00:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.812 00:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.812 { 00:20:55.812 "cntlid": 93, 00:20:55.812 "qid": 0, 00:20:55.812 "state": "enabled", 00:20:55.812 "listen_address": { 00:20:55.812 "trtype": "TCP", 00:20:55.812 "adrfam": "IPv4", 00:20:55.812 "traddr": "10.0.0.2", 00:20:55.812 "trsvcid": "4420" 00:20:55.812 }, 00:20:55.812 "peer_address": { 00:20:55.812 "trtype": "TCP", 00:20:55.812 "adrfam": "IPv4", 00:20:55.812 "traddr": "10.0.0.1", 00:20:55.812 "trsvcid": "42576" 00:20:55.812 }, 00:20:55.812 "auth": { 00:20:55.812 "state": "completed", 00:20:55.812 "digest": "sha384", 00:20:55.812 "dhgroup": "ffdhe8192" 00:20:55.812 } 00:20:55.812 } 00:20:55.812 ]' 00:20:55.812 00:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.812 00:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.812 00:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.812 00:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.812 00:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.812 00:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.812 00:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.812 00:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:56.070 00:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7: 00:20:57.004 00:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.004 00:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:57.004 00:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.004 00:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.004 00:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.004 00:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.004 00:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.004 00:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.262 00:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:57.262 00:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.262 00:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:57.262 00:01:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:57.262 00:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:57.262 00:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.262 00:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:57.262 00:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.262 00:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.262 00:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.262 00:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.262 00:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.194 00:20:58.194 00:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.194 00:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.194 00:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.451 00:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.451 00:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.451 00:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.451 00:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.451 00:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.451 00:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.451 { 00:20:58.451 "cntlid": 95, 00:20:58.451 "qid": 0, 00:20:58.451 "state": "enabled", 00:20:58.452 "listen_address": { 00:20:58.452 "trtype": "TCP", 00:20:58.452 "adrfam": "IPv4", 00:20:58.452 "traddr": "10.0.0.2", 00:20:58.452 "trsvcid": "4420" 00:20:58.452 }, 00:20:58.452 "peer_address": { 00:20:58.452 "trtype": "TCP", 00:20:58.452 "adrfam": "IPv4", 00:20:58.452 "traddr": "10.0.0.1", 00:20:58.452 "trsvcid": "42594" 00:20:58.452 }, 00:20:58.452 "auth": { 00:20:58.452 "state": "completed", 00:20:58.452 "digest": "sha384", 00:20:58.452 "dhgroup": "ffdhe8192" 00:20:58.452 } 00:20:58.452 } 00:20:58.452 ]' 00:20:58.452 00:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.452 00:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.452 00:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.709 00:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:58.709 00:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.709 00:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.709 00:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.709 00:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.966 
00:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=: 00:20:59.899 00:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.899 00:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:59.899 00:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.899 00:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.899 00:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.899 00:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:59.899 00:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.900 00:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.900 00:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.900 00:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:00.159 00:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:00.159 00:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.159 00:01:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:00.159 00:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:00.159 00:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:00.159 00:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.159 00:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.159 00:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.159 00:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.159 00:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.159 00:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.159 00:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.418 00:21:00.418 00:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.418 00:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.418 00:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.676 00:01:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.676 00:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.676 00:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.676 00:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.934 00:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.934 00:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.934 { 00:21:00.934 "cntlid": 97, 00:21:00.934 "qid": 0, 00:21:00.934 "state": "enabled", 00:21:00.934 "listen_address": { 00:21:00.934 "trtype": "TCP", 00:21:00.934 "adrfam": "IPv4", 00:21:00.934 "traddr": "10.0.0.2", 00:21:00.934 "trsvcid": "4420" 00:21:00.934 }, 00:21:00.934 "peer_address": { 00:21:00.934 "trtype": "TCP", 00:21:00.934 "adrfam": "IPv4", 00:21:00.934 "traddr": "10.0.0.1", 00:21:00.934 "trsvcid": "33598" 00:21:00.934 }, 00:21:00.934 "auth": { 00:21:00.934 "state": "completed", 00:21:00.934 "digest": "sha512", 00:21:00.934 "dhgroup": "null" 00:21:00.934 } 00:21:00.934 } 00:21:00.934 ]' 00:21:00.934 00:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.934 00:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.934 00:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.934 00:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:00.934 00:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.934 00:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.934 00:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.934 00:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.193 00:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:21:02.125 00:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.125 00:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:02.125 00:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.125 00:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.125 00:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.125 00:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.125 00:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.125 00:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.383 00:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:02.383 00:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey 
qpairs 00:21:02.383 00:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.383 00:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:02.383 00:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:02.383 00:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.383 00:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.383 00:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.383 00:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.383 00:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.383 00:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.383 00:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.641 00:21:02.641 00:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.641 00:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.641 00:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:02.899 00:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.899 00:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.899 00:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.899 00:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.899 00:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.899 00:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.899 { 00:21:02.899 "cntlid": 99, 00:21:02.899 "qid": 0, 00:21:02.899 "state": "enabled", 00:21:02.899 "listen_address": { 00:21:02.899 "trtype": "TCP", 00:21:02.899 "adrfam": "IPv4", 00:21:02.899 "traddr": "10.0.0.2", 00:21:02.899 "trsvcid": "4420" 00:21:02.899 }, 00:21:02.899 "peer_address": { 00:21:02.899 "trtype": "TCP", 00:21:02.899 "adrfam": "IPv4", 00:21:02.899 "traddr": "10.0.0.1", 00:21:02.899 "trsvcid": "33624" 00:21:02.899 }, 00:21:02.899 "auth": { 00:21:02.899 "state": "completed", 00:21:02.899 "digest": "sha512", 00:21:02.899 "dhgroup": "null" 00:21:02.899 } 00:21:02.899 } 00:21:02.899 ]' 00:21:02.899 00:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.158 00:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.158 00:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.158 00:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:03.158 00:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.158 00:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.158 00:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.158 
00:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.416 00:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==: 00:21:04.350 00:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.350 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:04.350 00:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.350 00:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.350 00:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.350 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.350 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.350 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.608 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:04.608 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:21:04.608 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.608 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:04.608 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:04.608 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.608 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.608 00:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.608 00:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.608 00:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.608 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.608 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.174 00:21:05.174 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.174 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.174 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:05.174 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.174 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.174 00:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.174 00:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.174 00:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.174 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.174 { 00:21:05.174 "cntlid": 101, 00:21:05.174 "qid": 0, 00:21:05.174 "state": "enabled", 00:21:05.174 "listen_address": { 00:21:05.174 "trtype": "TCP", 00:21:05.174 "adrfam": "IPv4", 00:21:05.174 "traddr": "10.0.0.2", 00:21:05.174 "trsvcid": "4420" 00:21:05.174 }, 00:21:05.174 "peer_address": { 00:21:05.174 "trtype": "TCP", 00:21:05.174 "adrfam": "IPv4", 00:21:05.174 "traddr": "10.0.0.1", 00:21:05.174 "trsvcid": "33650" 00:21:05.174 }, 00:21:05.174 "auth": { 00:21:05.174 "state": "completed", 00:21:05.174 "digest": "sha512", 00:21:05.174 "dhgroup": "null" 00:21:05.174 } 00:21:05.174 } 00:21:05.174 ]' 00:21:05.174 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.431 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.431 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.431 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:05.431 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.431 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.431 00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.431 
00:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.689 00:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7: 00:21:06.620 00:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.620 00:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:06.620 00:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.620 00:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.620 00:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.620 00:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.620 00:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.620 00:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.878 00:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:06.878 00:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:21:06.878 00:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.878 00:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:06.878 00:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:06.878 00:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.878 00:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:06.878 00:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.878 00:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.878 00:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.878 00:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:06.878 00:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:07.136 00:21:07.136 00:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.136 00:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.136 00:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.394 00:01:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.394 00:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.394 00:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.394 00:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.394 00:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.394 00:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.394 { 00:21:07.394 "cntlid": 103, 00:21:07.394 "qid": 0, 00:21:07.394 "state": "enabled", 00:21:07.394 "listen_address": { 00:21:07.394 "trtype": "TCP", 00:21:07.394 "adrfam": "IPv4", 00:21:07.394 "traddr": "10.0.0.2", 00:21:07.394 "trsvcid": "4420" 00:21:07.394 }, 00:21:07.394 "peer_address": { 00:21:07.394 "trtype": "TCP", 00:21:07.394 "adrfam": "IPv4", 00:21:07.394 "traddr": "10.0.0.1", 00:21:07.394 "trsvcid": "33660" 00:21:07.394 }, 00:21:07.394 "auth": { 00:21:07.394 "state": "completed", 00:21:07.394 "digest": "sha512", 00:21:07.394 "dhgroup": "null" 00:21:07.394 } 00:21:07.394 } 00:21:07.394 ]' 00:21:07.394 00:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.394 00:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.394 00:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.652 00:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:07.652 00:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.652 00:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.652 00:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.652 00:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.909 00:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=: 00:21:08.871 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.871 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:08.871 00:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.871 00:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.871 00:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.871 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.871 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.871 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:08.871 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:09.130 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:09.130 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:21:09.130 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:09.130 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:09.130 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:09.130 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.130 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.130 00:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.130 00:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.130 00:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.130 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.130 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.388 00:21:09.388 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.388 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.388 00:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:09.647 00:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.647 00:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.647 00:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.647 00:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.647 00:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.647 00:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.647 { 00:21:09.647 "cntlid": 105, 00:21:09.647 "qid": 0, 00:21:09.647 "state": "enabled", 00:21:09.647 "listen_address": { 00:21:09.647 "trtype": "TCP", 00:21:09.647 "adrfam": "IPv4", 00:21:09.647 "traddr": "10.0.0.2", 00:21:09.647 "trsvcid": "4420" 00:21:09.647 }, 00:21:09.647 "peer_address": { 00:21:09.647 "trtype": "TCP", 00:21:09.647 "adrfam": "IPv4", 00:21:09.647 "traddr": "10.0.0.1", 00:21:09.647 "trsvcid": "33692" 00:21:09.647 }, 00:21:09.647 "auth": { 00:21:09.647 "state": "completed", 00:21:09.647 "digest": "sha512", 00:21:09.647 "dhgroup": "ffdhe2048" 00:21:09.647 } 00:21:09.647 } 00:21:09.647 ]' 00:21:09.647 00:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.647 00:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.647 00:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.647 00:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.647 00:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.647 00:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.647 00:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:21:09.647 00:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.905 00:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:21:10.839 00:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.839 00:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:10.839 00:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.839 00:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.839 00:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.839 00:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.839 00:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.839 00:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:11.096 00:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:11.096 00:01:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.096 00:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.096 00:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:11.097 00:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:11.097 00:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.097 00:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.097 00:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.097 00:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.097 00:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.097 00:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.097 00:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.663 00:21:11.663 00:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.663 00:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.663 00:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.663 00:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.663 00:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.663 00:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.663 00:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.663 00:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.663 00:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.663 { 00:21:11.663 "cntlid": 107, 00:21:11.663 "qid": 0, 00:21:11.663 "state": "enabled", 00:21:11.663 "listen_address": { 00:21:11.663 "trtype": "TCP", 00:21:11.663 "adrfam": "IPv4", 00:21:11.663 "traddr": "10.0.0.2", 00:21:11.663 "trsvcid": "4420" 00:21:11.663 }, 00:21:11.663 "peer_address": { 00:21:11.663 "trtype": "TCP", 00:21:11.663 "adrfam": "IPv4", 00:21:11.663 "traddr": "10.0.0.1", 00:21:11.663 "trsvcid": "34448" 00:21:11.663 }, 00:21:11.663 "auth": { 00:21:11.663 "state": "completed", 00:21:11.663 "digest": "sha512", 00:21:11.663 "dhgroup": "ffdhe2048" 00:21:11.663 } 00:21:11.663 } 00:21:11.663 ]' 00:21:11.920 00:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.920 00:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.920 00:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.920 00:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:11.920 00:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.920 00:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.920 00:01:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.920 00:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.177 00:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==: 00:21:13.108 00:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.108 00:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:13.108 00:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.108 00:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.108 00:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.108 00:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.108 00:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.108 00:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.366 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe2048 2 00:21:13.366 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.366 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.366 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:13.366 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:13.366 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.366 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.366 00:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.366 00:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.366 00:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.366 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.366 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.931 00:21:13.931 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.931 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.931 00:01:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.931 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.931 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.931 00:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.931 00:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.188 00:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.188 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.188 { 00:21:14.188 "cntlid": 109, 00:21:14.189 "qid": 0, 00:21:14.189 "state": "enabled", 00:21:14.189 "listen_address": { 00:21:14.189 "trtype": "TCP", 00:21:14.189 "adrfam": "IPv4", 00:21:14.189 "traddr": "10.0.0.2", 00:21:14.189 "trsvcid": "4420" 00:21:14.189 }, 00:21:14.189 "peer_address": { 00:21:14.189 "trtype": "TCP", 00:21:14.189 "adrfam": "IPv4", 00:21:14.189 "traddr": "10.0.0.1", 00:21:14.189 "trsvcid": "34476" 00:21:14.189 }, 00:21:14.189 "auth": { 00:21:14.189 "state": "completed", 00:21:14.189 "digest": "sha512", 00:21:14.189 "dhgroup": "ffdhe2048" 00:21:14.189 } 00:21:14.189 } 00:21:14.189 ]' 00:21:14.189 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.189 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.189 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.189 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:14.189 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.189 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:21:14.189 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.189 00:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.446 00:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7: 00:21:15.378 00:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.378 00:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:15.378 00:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.378 00:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.378 00:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.378 00:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.378 00:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.378 00:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.635 00:01:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:15.635 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.635 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.635 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:15.635 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:15.635 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.635 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:15.635 00:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.635 00:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.635 00:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.635 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.635 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.891 00:21:16.148 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.148 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.148 00:01:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.148 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.148 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.148 00:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.148 00:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.148 00:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.148 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.148 { 00:21:16.148 "cntlid": 111, 00:21:16.148 "qid": 0, 00:21:16.148 "state": "enabled", 00:21:16.148 "listen_address": { 00:21:16.148 "trtype": "TCP", 00:21:16.148 "adrfam": "IPv4", 00:21:16.148 "traddr": "10.0.0.2", 00:21:16.148 "trsvcid": "4420" 00:21:16.148 }, 00:21:16.148 "peer_address": { 00:21:16.148 "trtype": "TCP", 00:21:16.148 "adrfam": "IPv4", 00:21:16.148 "traddr": "10.0.0.1", 00:21:16.148 "trsvcid": "34518" 00:21:16.148 }, 00:21:16.148 "auth": { 00:21:16.148 "state": "completed", 00:21:16.148 "digest": "sha512", 00:21:16.148 "dhgroup": "ffdhe2048" 00:21:16.148 } 00:21:16.148 } 00:21:16.148 ]' 00:21:16.148 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.406 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.406 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.406 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:16.406 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.406 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
00:21:16.406 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.406 00:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.663 00:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=: 00:21:17.596 00:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.596 00:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:17.596 00:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.596 00:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.596 00:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.596 00:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.596 00:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.596 00:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:17.596 00:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:17.852 00:01:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:17.852 00:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.852 00:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.852 00:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:17.852 00:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:17.852 00:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.852 00:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.852 00:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.852 00:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.852 00:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.852 00:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.852 00:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.179 00:21:18.179 00:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.179 00:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:21:18.179 00:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.437 00:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.437 00:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.437 00:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.437 00:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.437 00:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.437 00:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.437 { 00:21:18.437 "cntlid": 113, 00:21:18.437 "qid": 0, 00:21:18.437 "state": "enabled", 00:21:18.437 "listen_address": { 00:21:18.437 "trtype": "TCP", 00:21:18.437 "adrfam": "IPv4", 00:21:18.437 "traddr": "10.0.0.2", 00:21:18.437 "trsvcid": "4420" 00:21:18.437 }, 00:21:18.438 "peer_address": { 00:21:18.438 "trtype": "TCP", 00:21:18.438 "adrfam": "IPv4", 00:21:18.438 "traddr": "10.0.0.1", 00:21:18.438 "trsvcid": "34538" 00:21:18.438 }, 00:21:18.438 "auth": { 00:21:18.438 "state": "completed", 00:21:18.438 "digest": "sha512", 00:21:18.438 "dhgroup": "ffdhe3072" 00:21:18.438 } 00:21:18.438 } 00:21:18.438 ]' 00:21:18.438 00:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.438 00:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.438 00:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.438 00:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:18.438 00:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.696 00:01:24 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.696 00:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.696 00:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.953 00:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:21:19.888 00:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.888 00:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:19.888 00:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.888 00:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.888 00:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.888 00:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.888 00:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:19.888 00:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:20.146 00:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:20.146 00:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.146 00:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:20.146 00:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:20.146 00:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:20.146 00:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.146 00:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.146 00:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.146 00:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.146 00:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.146 00:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.146 00:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.404 00:21:20.404 00:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:21:20.404 00:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.404 00:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.662 00:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.662 00:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.662 00:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.662 00:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.662 00:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.662 00:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.662 { 00:21:20.662 "cntlid": 115, 00:21:20.662 "qid": 0, 00:21:20.662 "state": "enabled", 00:21:20.662 "listen_address": { 00:21:20.662 "trtype": "TCP", 00:21:20.662 "adrfam": "IPv4", 00:21:20.662 "traddr": "10.0.0.2", 00:21:20.662 "trsvcid": "4420" 00:21:20.662 }, 00:21:20.662 "peer_address": { 00:21:20.662 "trtype": "TCP", 00:21:20.662 "adrfam": "IPv4", 00:21:20.662 "traddr": "10.0.0.1", 00:21:20.662 "trsvcid": "44028" 00:21:20.662 }, 00:21:20.662 "auth": { 00:21:20.662 "state": "completed", 00:21:20.662 "digest": "sha512", 00:21:20.662 "dhgroup": "ffdhe3072" 00:21:20.662 } 00:21:20.662 } 00:21:20.662 ]' 00:21:20.662 00:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.921 00:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.921 00:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.921 00:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:20.921 00:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:21:20.921 00:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.921 00:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.921 00:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.179 00:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==: 00:21:22.112 00:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.112 00:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:22.112 00:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.112 00:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.112 00:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.112 00:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.112 00:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.112 00:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.370 00:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:22.370 00:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.370 00:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:22.370 00:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:22.370 00:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:22.370 00:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.370 00:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.370 00:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.370 00:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.370 00:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.370 00:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.370 00:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.628 00:21:22.628 00:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:21:22.628 00:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.628 00:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.886 00:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.886 00:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.886 00:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.886 00:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.886 00:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.886 00:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.886 { 00:21:22.886 "cntlid": 117, 00:21:22.886 "qid": 0, 00:21:22.886 "state": "enabled", 00:21:22.886 "listen_address": { 00:21:22.886 "trtype": "TCP", 00:21:22.886 "adrfam": "IPv4", 00:21:22.886 "traddr": "10.0.0.2", 00:21:22.886 "trsvcid": "4420" 00:21:22.886 }, 00:21:22.886 "peer_address": { 00:21:22.886 "trtype": "TCP", 00:21:22.886 "adrfam": "IPv4", 00:21:22.886 "traddr": "10.0.0.1", 00:21:22.886 "trsvcid": "44062" 00:21:22.886 }, 00:21:22.886 "auth": { 00:21:22.886 "state": "completed", 00:21:22.886 "digest": "sha512", 00:21:22.886 "dhgroup": "ffdhe3072" 00:21:22.886 } 00:21:22.886 } 00:21:22.886 ]' 00:21:22.886 00:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.145 00:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.145 00:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.145 00:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:23.145 00:01:28 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.145 00:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.145 00:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.145 00:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.403 00:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7: 00:21:24.336 00:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.336 00:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:24.336 00:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.336 00:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.336 00:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.336 00:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:24.336 00:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.336 00:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.594 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:24.594 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.594 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.594 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:24.594 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:24.594 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.594 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:24.594 00:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.594 00:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.594 00:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.594 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:24.594 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:24.853 00:21:24.853 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.853 00:01:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.853 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.110 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.110 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.110 00:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.110 00:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.110 00:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.110 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:25.110 { 00:21:25.110 "cntlid": 119, 00:21:25.110 "qid": 0, 00:21:25.110 "state": "enabled", 00:21:25.110 "listen_address": { 00:21:25.110 "trtype": "TCP", 00:21:25.110 "adrfam": "IPv4", 00:21:25.110 "traddr": "10.0.0.2", 00:21:25.110 "trsvcid": "4420" 00:21:25.110 }, 00:21:25.110 "peer_address": { 00:21:25.110 "trtype": "TCP", 00:21:25.110 "adrfam": "IPv4", 00:21:25.110 "traddr": "10.0.0.1", 00:21:25.110 "trsvcid": "44100" 00:21:25.110 }, 00:21:25.110 "auth": { 00:21:25.110 "state": "completed", 00:21:25.110 "digest": "sha512", 00:21:25.110 "dhgroup": "ffdhe3072" 00:21:25.110 } 00:21:25.110 } 00:21:25.110 ]' 00:21:25.110 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:25.368 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.368 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.368 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:25.368 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:21:25.368 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.368 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.368 00:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.625 00:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=: 00:21:26.558 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.558 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:26.558 00:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.558 00:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.558 00:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.558 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.558 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.558 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:26.558 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:26.817 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:26.817 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.817 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.817 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:26.817 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:26.817 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.817 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.817 00:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.817 00:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.817 00:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.817 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.817 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.075 
00:21:27.075 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.075 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.075 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.333 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.333 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.333 00:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.333 00:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.333 00:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.333 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.333 { 00:21:27.333 "cntlid": 121, 00:21:27.333 "qid": 0, 00:21:27.333 "state": "enabled", 00:21:27.333 "listen_address": { 00:21:27.333 "trtype": "TCP", 00:21:27.333 "adrfam": "IPv4", 00:21:27.333 "traddr": "10.0.0.2", 00:21:27.333 "trsvcid": "4420" 00:21:27.333 }, 00:21:27.333 "peer_address": { 00:21:27.333 "trtype": "TCP", 00:21:27.333 "adrfam": "IPv4", 00:21:27.333 "traddr": "10.0.0.1", 00:21:27.333 "trsvcid": "44124" 00:21:27.333 }, 00:21:27.333 "auth": { 00:21:27.333 "state": "completed", 00:21:27.333 "digest": "sha512", 00:21:27.333 "dhgroup": "ffdhe4096" 00:21:27.333 } 00:21:27.333 } 00:21:27.333 ]' 00:21:27.333 00:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.333 00:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.333 00:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.591 00:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:27.591 00:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.591 00:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.591 00:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.591 00:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.849 00:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:21:28.782 00:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.783 00:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:28.783 00:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.783 00:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.783 00:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.783 00:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.783 00:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.783 
00:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:29.041 00:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:29.041 00:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:29.041 00:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:29.041 00:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:29.041 00:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:29.041 00:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.041 00:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.041 00:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.041 00:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.041 00:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.041 00:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.041 00:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.299 00:21:29.299 00:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.299 00:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.299 00:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.557 00:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.557 00:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.557 00:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.557 00:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.557 00:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.557 00:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.557 { 00:21:29.557 "cntlid": 123, 00:21:29.557 "qid": 0, 00:21:29.557 "state": "enabled", 00:21:29.557 "listen_address": { 00:21:29.557 "trtype": "TCP", 00:21:29.557 "adrfam": "IPv4", 00:21:29.557 "traddr": "10.0.0.2", 00:21:29.557 "trsvcid": "4420" 00:21:29.557 }, 00:21:29.557 "peer_address": { 00:21:29.557 "trtype": "TCP", 00:21:29.557 "adrfam": "IPv4", 00:21:29.557 "traddr": "10.0.0.1", 00:21:29.557 "trsvcid": "44156" 00:21:29.557 }, 00:21:29.557 "auth": { 00:21:29.557 "state": "completed", 00:21:29.557 "digest": "sha512", 00:21:29.557 "dhgroup": "ffdhe4096" 00:21:29.557 } 00:21:29.557 } 00:21:29.557 ]' 00:21:29.557 00:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.557 00:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.557 00:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:21:29.815 00:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:29.815 00:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.815 00:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.815 00:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.815 00:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.073 00:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==: 00:21:31.006 00:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.006 00:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:31.006 00:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.006 00:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.006 00:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.006 00:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.006 00:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.006 00:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.264 00:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:31.264 00:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.264 00:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.264 00:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:31.264 00:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:31.264 00:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.264 00:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.264 00:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.264 00:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.264 00:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.264 00:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.264 00:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.522 00:21:31.522 00:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.522 00:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.522 00:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.779 00:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.779 00:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.779 00:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.779 00:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.779 00:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.779 00:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.779 { 00:21:31.779 "cntlid": 125, 00:21:31.779 "qid": 0, 00:21:31.779 "state": "enabled", 00:21:31.779 "listen_address": { 00:21:31.779 "trtype": "TCP", 00:21:31.779 "adrfam": "IPv4", 00:21:31.779 "traddr": "10.0.0.2", 00:21:31.779 "trsvcid": "4420" 00:21:31.779 }, 00:21:31.779 "peer_address": { 00:21:31.779 "trtype": "TCP", 00:21:31.779 "adrfam": "IPv4", 00:21:31.779 "traddr": "10.0.0.1", 00:21:31.779 "trsvcid": "45926" 00:21:31.779 }, 00:21:31.779 "auth": { 00:21:31.779 "state": "completed", 00:21:31.779 "digest": "sha512", 00:21:31.779 "dhgroup": "ffdhe4096" 00:21:31.779 } 00:21:31.779 } 00:21:31.779 ]' 00:21:31.779 00:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.779 00:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.779 00:01:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.036 00:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:32.036 00:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.036 00:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.036 00:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.036 00:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.294 00:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7: 00:21:33.227 00:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.227 00:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:33.227 00:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.227 00:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.227 00:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.227 00:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.227 00:01:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.227 00:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.485 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:33.485 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.485 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.485 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:33.485 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:33.485 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.485 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:33.485 00:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.485 00:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.485 00:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.485 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.485 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.743 00:21:33.743 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.743 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.743 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.308 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.308 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.308 00:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.308 00:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.308 00:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.308 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.308 { 00:21:34.308 "cntlid": 127, 00:21:34.308 "qid": 0, 00:21:34.308 "state": "enabled", 00:21:34.308 "listen_address": { 00:21:34.308 "trtype": "TCP", 00:21:34.308 "adrfam": "IPv4", 00:21:34.308 "traddr": "10.0.0.2", 00:21:34.308 "trsvcid": "4420" 00:21:34.308 }, 00:21:34.308 "peer_address": { 00:21:34.308 "trtype": "TCP", 00:21:34.308 "adrfam": "IPv4", 00:21:34.308 "traddr": "10.0.0.1", 00:21:34.308 "trsvcid": "45950" 00:21:34.308 }, 00:21:34.308 "auth": { 00:21:34.308 "state": "completed", 00:21:34.308 "digest": "sha512", 00:21:34.308 "dhgroup": "ffdhe4096" 00:21:34.308 } 00:21:34.308 } 00:21:34.308 ]' 00:21:34.308 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.308 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.308 00:01:39 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.308 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:34.308 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.308 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.308 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.308 00:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.566 00:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=: 00:21:35.499 00:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.499 00:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:35.500 00:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.500 00:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.500 00:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.500 00:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.500 00:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.500 00:01:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.500 00:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.757 00:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:35.757 00:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.757 00:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.757 00:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:35.757 00:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:35.757 00:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.757 00:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.757 00:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.757 00:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.757 00:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.757 00:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.757 00:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.321 00:21:36.321 00:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.321 00:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.321 00:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.579 00:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.579 00:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.579 00:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.579 00:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.579 00:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.579 00:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.579 { 00:21:36.579 "cntlid": 129, 00:21:36.579 "qid": 0, 00:21:36.579 "state": "enabled", 00:21:36.579 "listen_address": { 00:21:36.579 "trtype": "TCP", 00:21:36.579 "adrfam": "IPv4", 00:21:36.579 "traddr": "10.0.0.2", 00:21:36.579 "trsvcid": "4420" 00:21:36.579 }, 00:21:36.579 "peer_address": { 00:21:36.579 "trtype": "TCP", 00:21:36.579 "adrfam": "IPv4", 00:21:36.579 "traddr": "10.0.0.1", 00:21:36.579 "trsvcid": "45984" 00:21:36.579 }, 00:21:36.579 "auth": { 00:21:36.579 "state": "completed", 00:21:36.579 "digest": "sha512", 00:21:36.579 "dhgroup": "ffdhe6144" 00:21:36.579 } 00:21:36.579 } 00:21:36.579 ]' 00:21:36.579 00:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.579 00:01:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.579 00:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.837 00:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:36.837 00:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.837 00:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.837 00:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.837 00:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.095 00:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:21:38.030 00:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.030 00:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:38.030 00:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.030 00:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.030 00:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.030 00:01:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.030 00:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.030 00:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.287 00:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:38.288 00:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.288 00:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.288 00:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:38.288 00:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:38.288 00:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.288 00:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.288 00:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.288 00:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.288 00:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.288 00:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.288 00:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.853 00:21:38.853 00:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.853 00:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.853 00:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.111 00:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.111 00:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.111 00:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.111 00:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.111 00:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.111 00:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.111 { 00:21:39.111 "cntlid": 131, 00:21:39.111 "qid": 0, 00:21:39.111 "state": "enabled", 00:21:39.111 "listen_address": { 00:21:39.111 "trtype": "TCP", 00:21:39.111 "adrfam": "IPv4", 00:21:39.111 "traddr": "10.0.0.2", 00:21:39.111 "trsvcid": "4420" 00:21:39.111 }, 00:21:39.111 "peer_address": { 00:21:39.111 "trtype": "TCP", 00:21:39.111 "adrfam": "IPv4", 00:21:39.111 "traddr": "10.0.0.1", 00:21:39.111 "trsvcid": "46012" 00:21:39.111 }, 00:21:39.111 "auth": { 00:21:39.111 "state": "completed", 00:21:39.111 "digest": "sha512", 00:21:39.111 "dhgroup": "ffdhe6144" 00:21:39.111 } 00:21:39.111 } 00:21:39.111 ]' 00:21:39.111 00:01:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.111 00:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.111 00:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.111 00:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:39.111 00:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.369 00:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.369 00:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.369 00:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.369 00:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==: 00:21:40.303 00:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.561 00:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:40.561 00:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.562 00:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.562 00:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:21:40.562 00:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.562 00:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.562 00:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.820 00:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:40.820 00:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.820 00:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.820 00:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:40.820 00:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:40.820 00:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.820 00:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.820 00:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.820 00:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.820 00:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.820 00:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.820 00:01:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.386 00:21:41.386 00:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.386 00:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.386 00:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.644 00:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.644 00:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.644 00:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.644 00:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.644 00:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.644 00:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.644 { 00:21:41.644 "cntlid": 133, 00:21:41.644 "qid": 0, 00:21:41.644 "state": "enabled", 00:21:41.644 "listen_address": { 00:21:41.644 "trtype": "TCP", 00:21:41.644 "adrfam": "IPv4", 00:21:41.644 "traddr": "10.0.0.2", 00:21:41.644 "trsvcid": "4420" 00:21:41.644 }, 00:21:41.644 "peer_address": { 00:21:41.644 "trtype": "TCP", 00:21:41.644 "adrfam": "IPv4", 00:21:41.644 "traddr": "10.0.0.1", 00:21:41.644 "trsvcid": "60714" 00:21:41.644 }, 00:21:41.644 "auth": { 00:21:41.644 "state": "completed", 00:21:41.644 "digest": "sha512", 00:21:41.644 "dhgroup": "ffdhe6144" 00:21:41.644 } 00:21:41.644 } 00:21:41.644 ]' 
00:21:41.644 00:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.644 00:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.644 00:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.644 00:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:41.644 00:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.644 00:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.644 00:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.644 00:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.902 00:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7: 00:21:42.892 00:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.892 00:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:42.892 00:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.892 00:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.892 00:01:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.892 00:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.892 00:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.892 00:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:43.150 00:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:43.150 00:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.150 00:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:43.150 00:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:43.150 00:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:43.150 00:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.150 00:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:43.150 00:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.150 00:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.150 00:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.150 00:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:43.150 00:01:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:43.715 00:21:43.715 00:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.715 00:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.715 00:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.973 00:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.973 00:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.973 00:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.973 00:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.973 00:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.973 00:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.973 { 00:21:43.973 "cntlid": 135, 00:21:43.973 "qid": 0, 00:21:43.973 "state": "enabled", 00:21:43.973 "listen_address": { 00:21:43.973 "trtype": "TCP", 00:21:43.973 "adrfam": "IPv4", 00:21:43.973 "traddr": "10.0.0.2", 00:21:43.973 "trsvcid": "4420" 00:21:43.973 }, 00:21:43.973 "peer_address": { 00:21:43.973 "trtype": "TCP", 00:21:43.973 "adrfam": "IPv4", 00:21:43.973 "traddr": "10.0.0.1", 00:21:43.973 "trsvcid": "60736" 00:21:43.973 }, 00:21:43.973 "auth": { 00:21:43.973 "state": "completed", 00:21:43.973 "digest": "sha512", 00:21:43.973 "dhgroup": "ffdhe6144" 00:21:43.973 } 00:21:43.973 } 00:21:43.973 ]' 00:21:43.973 00:01:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.973 00:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.973 00:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.973 00:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:43.973 00:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.230 00:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.230 00:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.230 00:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.487 00:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=: 00:21:45.418 00:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.418 00:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:45.418 00:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.418 00:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.418 00:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.418 
00:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:45.418 00:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.418 00:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:45.418 00:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:45.675 00:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:45.676 00:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.676 00:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.676 00:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:45.676 00:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:45.676 00:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.676 00:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.676 00:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.676 00:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.676 00:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.676 00:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.676 00:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.607 00:21:46.607 00:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.607 00:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.608 00:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.864 00:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.864 00:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.864 00:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.864 00:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.865 00:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.865 00:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.865 { 00:21:46.865 "cntlid": 137, 00:21:46.865 "qid": 0, 00:21:46.865 "state": "enabled", 00:21:46.865 "listen_address": { 00:21:46.865 "trtype": "TCP", 00:21:46.865 "adrfam": "IPv4", 00:21:46.865 "traddr": "10.0.0.2", 00:21:46.865 "trsvcid": "4420" 00:21:46.865 }, 00:21:46.865 "peer_address": { 00:21:46.865 "trtype": "TCP", 00:21:46.865 "adrfam": "IPv4", 00:21:46.865 "traddr": "10.0.0.1", 00:21:46.865 "trsvcid": "60754" 00:21:46.865 }, 00:21:46.865 "auth": { 00:21:46.865 "state": "completed", 00:21:46.865 "digest": "sha512", 00:21:46.865 "dhgroup": 
"ffdhe8192" 00:21:46.865 } 00:21:46.865 } 00:21:46.865 ]' 00:21:46.865 00:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.865 00:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.865 00:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.865 00:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:46.865 00:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.865 00:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.865 00:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.865 00:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.122 00:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:21:48.053 00:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.053 00:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:48.053 00:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.053 00:01:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.053 00:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.053 00:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.053 00:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.053 00:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.311 00:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:48.311 00:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.311 00:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:48.311 00:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:48.311 00:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:48.311 00:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.311 00:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.311 00:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.311 00:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.311 00:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.311 00:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.311 00:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.243 00:21:49.243 00:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.243 00:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.243 00:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.501 00:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.501 00:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.501 00:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.501 00:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.501 00:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.501 00:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.501 { 00:21:49.501 "cntlid": 139, 00:21:49.501 "qid": 0, 00:21:49.501 "state": "enabled", 00:21:49.501 "listen_address": { 00:21:49.501 "trtype": "TCP", 00:21:49.501 "adrfam": "IPv4", 00:21:49.501 "traddr": "10.0.0.2", 00:21:49.501 "trsvcid": "4420" 00:21:49.501 }, 00:21:49.501 "peer_address": { 00:21:49.501 "trtype": "TCP", 00:21:49.501 "adrfam": "IPv4", 00:21:49.501 "traddr": "10.0.0.1", 00:21:49.501 "trsvcid": "60788" 00:21:49.501 }, 00:21:49.501 
"auth": { 00:21:49.501 "state": "completed", 00:21:49.501 "digest": "sha512", 00:21:49.501 "dhgroup": "ffdhe8192" 00:21:49.501 } 00:21:49.501 } 00:21:49.501 ]' 00:21:49.501 00:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.758 00:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.758 00:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.758 00:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:49.758 00:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.758 00:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.758 00:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.758 00:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.016 00:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ODk3YjA0YjUzZjkzMzgzYTg4NGMwYzMzMGU0ODhlY2UBQzHF: --dhchap-ctrl-secret DHHC-1:02:N2VkYTkyM2Q1OTg0NTU1YzkwNjg0Y2JkZmUzOTQ3YWYzZjBlMmQ4ZjU0MDE1M2I3xxK7hA==: 00:21:50.948 00:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.948 00:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:50.948 00:01:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.948 00:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.948 00:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.948 00:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.948 00:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:50.948 00:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.206 00:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:51.206 00:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.206 00:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:51.206 00:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:51.206 00:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:51.206 00:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.206 00:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.206 00:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.206 00:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.206 00:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.206 00:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.206 00:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.139 00:21:52.139 00:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.139 00:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.139 00:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.397 00:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.397 00:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.397 00:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.397 00:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.397 00:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.397 00:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.397 { 00:21:52.397 "cntlid": 141, 00:21:52.397 "qid": 0, 00:21:52.397 "state": "enabled", 00:21:52.397 "listen_address": { 00:21:52.397 "trtype": "TCP", 00:21:52.397 "adrfam": "IPv4", 00:21:52.397 "traddr": "10.0.0.2", 00:21:52.397 "trsvcid": "4420" 00:21:52.397 }, 00:21:52.397 "peer_address": { 00:21:52.397 "trtype": "TCP", 00:21:52.397 "adrfam": "IPv4", 00:21:52.397 "traddr": "10.0.0.1", 00:21:52.397 "trsvcid": 
"37328" 00:21:52.397 }, 00:21:52.397 "auth": { 00:21:52.397 "state": "completed", 00:21:52.397 "digest": "sha512", 00:21:52.397 "dhgroup": "ffdhe8192" 00:21:52.397 } 00:21:52.397 } 00:21:52.397 ]' 00:21:52.397 00:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.397 00:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.397 00:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.397 00:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.397 00:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.655 00:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.655 00:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.655 00:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.912 00:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZjA4NGQ5OTRlZDJlM2MzOWNhMDM3NTNiY2MyMWY1YWM5Y2VmNTk0MzJkMmU2MjQyfT1xRg==: --dhchap-ctrl-secret DHHC-1:01:MWI4YjcwMzhjOWZiY2QwODY1OGI0YWQ5ODcyNjQ1OTbfgqp7: 00:21:53.844 00:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.844 00:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:53.844 00:01:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.844 00:01:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.844 00:01:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.844 00:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.844 00:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.844 00:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.102 00:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:21:54.102 00:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.102 00:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.102 00:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:54.102 00:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:54.102 00:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.102 00:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:54.102 00:01:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.102 00:01:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.102 00:01:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.102 00:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:54.102 00:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:55.036 00:21:55.036 00:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.036 00:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.036 00:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.036 00:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.036 00:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.036 00:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.036 00:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.036 00:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.036 00:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.036 { 00:21:55.036 "cntlid": 143, 00:21:55.036 "qid": 0, 00:21:55.036 "state": "enabled", 00:21:55.036 "listen_address": { 00:21:55.036 "trtype": "TCP", 00:21:55.036 "adrfam": "IPv4", 00:21:55.036 "traddr": "10.0.0.2", 00:21:55.036 "trsvcid": "4420" 00:21:55.036 }, 00:21:55.036 "peer_address": { 00:21:55.036 "trtype": "TCP", 00:21:55.036 "adrfam": "IPv4", 00:21:55.036 "traddr": "10.0.0.1", 00:21:55.036 "trsvcid": "37348" 00:21:55.036 }, 00:21:55.036 "auth": { 
00:21:55.036 "state": "completed", 00:21:55.036 "digest": "sha512", 00:21:55.036 "dhgroup": "ffdhe8192" 00:21:55.036 } 00:21:55.036 } 00:21:55.036 ]' 00:21:55.036 00:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.294 00:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.294 00:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.294 00:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:55.294 00:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.294 00:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.294 00:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.294 00:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.552 00:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=: 00:21:56.486 00:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.486 00:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:56.486 00:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.486 00:02:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.486 00:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.486 00:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:56.486 00:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:56.486 00:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:56.486 00:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:56.486 00:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:56.486 00:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:56.744 00:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:56.744 00:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.744 00:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:56.744 00:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:56.744 00:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:56.744 00:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.744 00:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.744 00:02:02 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.744 00:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.744 00:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.744 00:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.744 00:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.676 00:21:57.676 00:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.676 00:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.676 00:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.933 00:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.934 00:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.934 00:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.934 00:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.934 00:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.934 00:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.934 { 00:21:57.934 "cntlid": 145, 00:21:57.934 "qid": 0, 
00:21:57.934 "state": "enabled", 00:21:57.934 "listen_address": { 00:21:57.934 "trtype": "TCP", 00:21:57.934 "adrfam": "IPv4", 00:21:57.934 "traddr": "10.0.0.2", 00:21:57.934 "trsvcid": "4420" 00:21:57.934 }, 00:21:57.934 "peer_address": { 00:21:57.934 "trtype": "TCP", 00:21:57.934 "adrfam": "IPv4", 00:21:57.934 "traddr": "10.0.0.1", 00:21:57.934 "trsvcid": "37376" 00:21:57.934 }, 00:21:57.934 "auth": { 00:21:57.934 "state": "completed", 00:21:57.934 "digest": "sha512", 00:21:57.934 "dhgroup": "ffdhe8192" 00:21:57.934 } 00:21:57.934 } 00:21:57.934 ]' 00:21:57.934 00:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.934 00:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.934 00:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.934 00:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:57.934 00:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.934 00:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.934 00:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.934 00:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.191 00:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2I3MmYxNjdjMmI0NGQ5NDc4NjEwNGEyYmQ4OWY4MWUwMTNlMzM2MGVlZmQyY2U4poQBdw==: --dhchap-ctrl-secret DHHC-1:03:M2QyNzZlZGI2YTYyNTE2ZWU5ZGIyZDg5MjU3ZDUyYzM3NGRlMzk5MGEyZjAyNjNmZmNmZWIwOGY1YjNiNDQ5N0JiTDo=: 00:21:59.124 00:02:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.124 00:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:59.124 00:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.124 00:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.124 00:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.124 00:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:59.124 00:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.124 00:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.124 00:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.124 00:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:59.124 00:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:59.124 00:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:59.124 00:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:59.124 00:02:04 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:59.124 00:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:59.124 00:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:59.124 00:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:59.124 00:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:00.119 request: 00:22:00.119 { 00:22:00.119 "name": "nvme0", 00:22:00.119 "trtype": "tcp", 00:22:00.119 "traddr": "10.0.0.2", 00:22:00.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:00.119 "adrfam": "ipv4", 00:22:00.119 "trsvcid": "4420", 00:22:00.119 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:00.119 "dhchap_key": "key2", 00:22:00.119 "method": "bdev_nvme_attach_controller", 00:22:00.119 "req_id": 1 00:22:00.119 } 00:22:00.119 Got JSON-RPC error response 00:22:00.119 response: 00:22:00.119 { 00:22:00.119 "code": -5, 00:22:00.119 "message": "Input/output error" 00:22:00.119 } 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:00.119 00:02:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:00.119 00:02:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:00.119 00:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.051 request: 00:22:01.051 { 00:22:01.051 "name": "nvme0", 00:22:01.051 "trtype": "tcp", 00:22:01.051 "traddr": "10.0.0.2", 00:22:01.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:01.051 "adrfam": "ipv4", 00:22:01.051 "trsvcid": "4420", 00:22:01.051 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:01.051 "dhchap_key": "key1", 00:22:01.051 "dhchap_ctrlr_key": "ckey2", 00:22:01.051 "method": "bdev_nvme_attach_controller", 00:22:01.051 "req_id": 1 00:22:01.051 } 00:22:01.051 Got JSON-RPC error response 00:22:01.051 response: 00:22:01.051 { 00:22:01.051 "code": -5, 00:22:01.051 "message": "Input/output error" 00:22:01.051 } 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:01.051 00:02:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # type -t hostrpc 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.051 00:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.984 request: 00:22:01.984 { 00:22:01.984 "name": "nvme0", 00:22:01.984 "trtype": "tcp", 00:22:01.985 "traddr": "10.0.0.2", 00:22:01.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:01.985 "adrfam": "ipv4", 00:22:01.985 "trsvcid": "4420", 00:22:01.985 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:01.985 "dhchap_key": "key1", 00:22:01.985 "dhchap_ctrlr_key": "ckey1", 00:22:01.985 "method": "bdev_nvme_attach_controller", 00:22:01.985 "req_id": 1 00:22:01.985 } 00:22:01.985 Got JSON-RPC error response 00:22:01.985 response: 00:22:01.985 { 00:22:01.985 "code": -5, 00:22:01.985 "message": "Input/output error" 00:22:01.985 } 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 794546 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 794546 ']' 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 794546 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 794546 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 794546' 00:22:01.985 killing process with pid 794546 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 794546 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 794546 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=817176 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 817176 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 817176 ']' 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:01.985 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.243 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:02.243 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:02.243 00:02:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:02.243 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.243 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.243 00:02:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.243 00:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:02.243 00:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # 
waitforlisten 817176 00:22:02.243 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 817176 ']' 00:22:02.243 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.243 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:02.243 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.243 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:02.243 00:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.502 00:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:02.502 00:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:02.502 00:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:02.502 00:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.502 00:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.760 00:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.760 00:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:02.760 00:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.760 00:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:02.760 00:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:02.760 00:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:02.760 00:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:02.760 00:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:22:02.760 00:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.760 00:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.760 00:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.760 00:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:02.760 00:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:03.694 00:22:03.694 00:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.694 00:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.694 00:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.953 00:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.953 00:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.953 00:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.953 00:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.953 00:02:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.953 00:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:03.953 { 00:22:03.953 "cntlid": 1, 00:22:03.953 "qid": 0, 00:22:03.953 "state": "enabled", 00:22:03.953 "listen_address": { 00:22:03.953 "trtype": "TCP", 00:22:03.953 "adrfam": "IPv4", 00:22:03.953 "traddr": "10.0.0.2", 00:22:03.953 "trsvcid": "4420" 00:22:03.953 }, 00:22:03.953 "peer_address": { 00:22:03.953 "trtype": "TCP", 00:22:03.953 "adrfam": "IPv4", 00:22:03.953 "traddr": "10.0.0.1", 00:22:03.953 "trsvcid": "47880" 00:22:03.953 }, 00:22:03.953 "auth": { 00:22:03.953 "state": "completed", 00:22:03.953 "digest": "sha512", 00:22:03.953 "dhgroup": "ffdhe8192" 00:22:03.953 } 00:22:03.953 } 00:22:03.953 ]' 00:22:03.953 00:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.953 00:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.953 00:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.953 00:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:03.953 00:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.953 00:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.953 00:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.953 00:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.211 00:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:03:MTlhZWQyZjE5MDRhYjVhZDU1NjQ3YmY3MDJkMzFiZDRkYjljNjI4MjE1YmM4NTc1N2U1Mjk5MGNkODZiZDdlNaHQoFw=: 00:22:05.144 00:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.144 00:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:05.144 00:02:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.144 00:02:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.402 00:02:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.402 00:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:22:05.402 00:02:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.402 00:02:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.402 00:02:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.402 00:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:05.402 00:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:05.660 00:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.660 00:02:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@648 -- # local es=0 00:22:05.660 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.660 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:05.660 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:05.660 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:05.660 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:05.660 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.660 00:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.660 request: 00:22:05.660 { 00:22:05.660 "name": "nvme0", 00:22:05.660 "trtype": "tcp", 00:22:05.660 "traddr": "10.0.0.2", 00:22:05.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:05.660 "adrfam": "ipv4", 00:22:05.660 "trsvcid": "4420", 00:22:05.660 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.660 "dhchap_key": "key3", 00:22:05.660 "method": "bdev_nvme_attach_controller", 00:22:05.660 "req_id": 1 00:22:05.660 } 00:22:05.660 Got JSON-RPC error response 00:22:05.660 response: 00:22:05.660 { 00:22:05.660 "code": -5, 
00:22:05.660 "message": "Input/output error" 00:22:05.660 } 00:22:05.918 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:05.918 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:05.918 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:05.918 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:05.918 00:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:05.918 00:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:05.918 00:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:05.918 00:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:06.176 00:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.176 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:06.176 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.176 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:06.176 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:06.176 00:02:11 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:06.176 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:06.176 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.176 00:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.176 request: 00:22:06.176 { 00:22:06.176 "name": "nvme0", 00:22:06.176 "trtype": "tcp", 00:22:06.176 "traddr": "10.0.0.2", 00:22:06.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:06.176 "adrfam": "ipv4", 00:22:06.176 "trsvcid": "4420", 00:22:06.176 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.176 "dhchap_key": "key3", 00:22:06.176 "method": "bdev_nvme_attach_controller", 00:22:06.176 "req_id": 1 00:22:06.176 } 00:22:06.176 Got JSON-RPC error response 00:22:06.176 response: 00:22:06.176 { 00:22:06.176 "code": -5, 00:22:06.176 "message": "Input/output error" 00:22:06.176 } 00:22:06.434 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:06.434 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:06.434 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:06.434 00:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:06.434 00:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:06.434 00:02:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:06.434 00:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:06.434 00:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:06.434 00:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:06.434 00:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:06.434 00:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:06.434 00:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.434 00:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.692 00:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.692 00:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:06.692 00:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.692 00:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.692 00:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.692 00:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:06.692 00:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:06.692 00:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:06.692 00:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:06.692 00:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:06.692 00:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:06.692 00:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:06.692 00:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:06.692 00:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:06.950 request: 00:22:06.950 { 00:22:06.950 "name": "nvme0", 00:22:06.950 "trtype": "tcp", 00:22:06.950 "traddr": "10.0.0.2", 00:22:06.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:06.950 "adrfam": "ipv4", 00:22:06.950 "trsvcid": "4420", 00:22:06.950 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:22:06.950 "dhchap_key": "key0", 00:22:06.950 "dhchap_ctrlr_key": "key1", 00:22:06.950 "method": "bdev_nvme_attach_controller", 00:22:06.950 "req_id": 1 00:22:06.950 } 00:22:06.950 Got JSON-RPC error response 00:22:06.950 response: 00:22:06.950 { 00:22:06.950 "code": -5, 00:22:06.950 "message": "Input/output error" 00:22:06.950 } 00:22:06.950 00:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:06.950 00:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:06.950 00:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:06.950 00:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:06.950 00:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:06.950 00:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:07.209 00:22:07.209 00:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:07.209 00:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:07.209 00:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.467 00:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.467 00:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:22:07.467 00:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.725 00:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:07.725 00:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:07.725 00:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 794680 00:22:07.725 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 794680 ']' 00:22:07.725 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 794680 00:22:07.725 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:07.725 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:07.725 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 794680 00:22:07.725 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:07.725 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:07.725 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 794680' 00:22:07.725 killing process with pid 794680 00:22:07.725 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 794680 00:22:07.725 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 794680 00:22:07.983 00:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:07.983 00:02:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:07.983 00:02:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:07.983 00:02:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:07.983 00:02:13 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@120 -- # set +e 00:22:07.983 00:02:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:07.983 00:02:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:07.983 rmmod nvme_tcp 00:22:08.241 rmmod nvme_fabrics 00:22:08.241 rmmod nvme_keyring 00:22:08.241 00:02:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:08.241 00:02:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:08.241 00:02:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:08.241 00:02:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 817176 ']' 00:22:08.241 00:02:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 817176 00:22:08.241 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 817176 ']' 00:22:08.241 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 817176 00:22:08.241 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:08.241 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:08.241 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 817176 00:22:08.241 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:08.241 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:08.241 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 817176' 00:22:08.241 killing process with pid 817176 00:22:08.241 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 817176 00:22:08.241 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 817176 00:22:08.499 00:02:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:08.499 00:02:13 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:08.499 00:02:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:08.499 00:02:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:08.499 00:02:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:08.499 00:02:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.499 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.499 00:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.401 00:02:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:10.401 00:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.RK8 /tmp/spdk.key-sha256.cFX /tmp/spdk.key-sha384.NFj /tmp/spdk.key-sha512.diq /tmp/spdk.key-sha512.Rzk /tmp/spdk.key-sha384.jt5 /tmp/spdk.key-sha256.9EN '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:10.401 00:22:10.401 real 3m9.181s 00:22:10.401 user 7m20.025s 00:22:10.401 sys 0m24.802s 00:22:10.401 00:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:10.401 00:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.401 ************************************ 00:22:10.401 END TEST nvmf_auth_target 00:22:10.401 ************************************ 00:22:10.401 00:02:16 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:10.401 00:02:16 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:10.401 00:02:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 
00:22:10.401 00:02:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:10.401 00:02:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:10.401 ************************************ 00:22:10.401 START TEST nvmf_bdevio_no_huge 00:22:10.401 ************************************ 00:22:10.401 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:10.660 * Looking for test storage... 00:22:10.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:10.660 00:02:16 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:10.660 00:02:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:12.559 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.559 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:12.559 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:12.559 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:12.559 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:12.560 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:12.560 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:12.560 Found net devices under 0000:09:00.0: cvl_0_0 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:12.560 Found net devices under 0000:09:00.1: cvl_0_1 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:12.560 00:02:18 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.560 
00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:12.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:22:12.560 00:22:12.560 --- 10.0.0.2 ping statistics --- 00:22:12.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.560 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:12.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:22:12.560 00:22:12.560 --- 10.0.0.1 ping statistics --- 00:22:12.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.560 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:12.560 00:02:18 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=819824 00:22:12.560 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:12.561 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 819824 00:22:12.561 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 819824 ']' 00:22:12.561 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.561 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:12.561 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.561 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:12.561 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:12.820 [2024-07-25 00:02:18.303875] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:22:12.820 [2024-07-25 00:02:18.303944] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:12.820 [2024-07-25 00:02:18.373601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:12.820 [2024-07-25 00:02:18.455972] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.820 [2024-07-25 00:02:18.456025] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.820 [2024-07-25 00:02:18.456038] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.820 [2024-07-25 00:02:18.456049] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.820 [2024-07-25 00:02:18.456059] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:12.820 [2024-07-25 00:02:18.456151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:12.820 [2024-07-25 00:02:18.456238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:12.820 [2024-07-25 00:02:18.456304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:12.820 [2024-07-25 00:02:18.456307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:13.078 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:13.078 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:22:13.078 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:13.078 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.078 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:13.078 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.078 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:13.078 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.078 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:13.078 [2024-07-25 00:02:18.577185] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.078 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.078 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:13.078 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.078 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:13.078 Malloc0 00:22:13.078 00:02:18 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.078 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:13.078 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.078 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:13.079 [2024-07-25 00:02:18.614901] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:13.079 00:02:18 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:13.079 { 00:22:13.079 "params": { 00:22:13.079 "name": "Nvme$subsystem", 00:22:13.079 "trtype": "$TEST_TRANSPORT", 00:22:13.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.079 "adrfam": "ipv4", 00:22:13.079 "trsvcid": "$NVMF_PORT", 00:22:13.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.079 "hdgst": ${hdgst:-false}, 00:22:13.079 "ddgst": ${ddgst:-false} 00:22:13.079 }, 00:22:13.079 "method": "bdev_nvme_attach_controller" 00:22:13.079 } 00:22:13.079 EOF 00:22:13.079 )") 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:13.079 00:02:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:13.079 "params": { 00:22:13.079 "name": "Nvme1", 00:22:13.079 "trtype": "tcp", 00:22:13.079 "traddr": "10.0.0.2", 00:22:13.079 "adrfam": "ipv4", 00:22:13.079 "trsvcid": "4420", 00:22:13.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:13.079 "hdgst": false, 00:22:13.079 "ddgst": false 00:22:13.079 }, 00:22:13.079 "method": "bdev_nvme_attach_controller" 00:22:13.079 }' 00:22:13.079 [2024-07-25 00:02:18.660052] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:22:13.079 [2024-07-25 00:02:18.660166] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid819968 ] 00:22:13.079 [2024-07-25 00:02:18.718921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:13.337 [2024-07-25 00:02:18.807380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.337 [2024-07-25 00:02:18.807431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.337 [2024-07-25 00:02:18.807434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.594 I/O targets: 00:22:13.594 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:13.594 00:22:13.594 00:22:13.594 CUnit - A unit testing framework for C - Version 2.1-3 00:22:13.594 http://cunit.sourceforge.net/ 00:22:13.594 00:22:13.594 00:22:13.594 Suite: bdevio tests on: Nvme1n1 00:22:13.594 Test: blockdev write read block ...passed 00:22:13.594 Test: blockdev write zeroes read block ...passed 00:22:13.594 Test: blockdev write zeroes read no split ...passed 00:22:13.594 Test: blockdev write zeroes read split ...passed 00:22:13.851 Test: blockdev write zeroes read split partial ...passed 00:22:13.851 Test: blockdev reset ...[2024-07-25 00:02:19.334538] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:13.851 [2024-07-25 00:02:19.334645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57a2a0 (9): Bad file descriptor 00:22:13.851 [2024-07-25 00:02:19.352477] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:13.851 passed 00:22:13.851 Test: blockdev write read 8 blocks ...passed 00:22:13.851 Test: blockdev write read size > 128k ...passed 00:22:13.851 Test: blockdev write read invalid size ...passed 00:22:13.851 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:13.851 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:13.851 Test: blockdev write read max offset ...passed 00:22:13.851 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:13.851 Test: blockdev writev readv 8 blocks ...passed 00:22:13.851 Test: blockdev writev readv 30 x 1block ...passed 00:22:13.851 Test: blockdev writev readv block ...passed 00:22:14.108 Test: blockdev writev readv size > 128k ...passed 00:22:14.108 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:14.108 Test: blockdev comparev and writev ...[2024-07-25 00:02:19.570999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.108 [2024-07-25 00:02:19.571035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:14.108 [2024-07-25 00:02:19.571059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.108 [2024-07-25 00:02:19.571076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:14.108 [2024-07-25 00:02:19.571486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.108 [2024-07-25 00:02:19.571512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:14.108 [2024-07-25 00:02:19.571534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.108 [2024-07-25 00:02:19.571550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:14.108 [2024-07-25 00:02:19.571944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.108 [2024-07-25 00:02:19.571970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:14.108 [2024-07-25 00:02:19.571992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.108 [2024-07-25 00:02:19.572008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:14.108 [2024-07-25 00:02:19.572407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.108 [2024-07-25 00:02:19.572432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:14.108 [2024-07-25 00:02:19.572454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.108 [2024-07-25 00:02:19.572469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:14.108 passed 00:22:14.108 Test: blockdev nvme passthru rw ...passed 00:22:14.109 Test: blockdev nvme passthru vendor specific ...[2024-07-25 00:02:19.656488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:14.109 [2024-07-25 00:02:19.656515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:14.109 [2024-07-25 00:02:19.656727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:14.109 [2024-07-25 00:02:19.656752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:14.109 [2024-07-25 00:02:19.656956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:14.109 [2024-07-25 00:02:19.656980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:14.109 [2024-07-25 00:02:19.657197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:14.109 [2024-07-25 00:02:19.657222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:14.109 passed 00:22:14.109 Test: blockdev nvme admin passthru ...passed 00:22:14.109 Test: blockdev copy ...passed 00:22:14.109 00:22:14.109 Run Summary: Type Total Ran Passed Failed Inactive 00:22:14.109 suites 1 1 n/a 0 0 00:22:14.109 tests 23 23 23 0 0 00:22:14.109 asserts 152 152 152 0 n/a 00:22:14.109 00:22:14.109 Elapsed time = 1.191 seconds 00:22:14.366 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:14.366 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.366 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.366 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.366 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:14.366 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:22:14.366 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:14.366 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:14.366 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:14.366 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:14.366 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:14.366 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:14.366 rmmod nvme_tcp 00:22:14.366 rmmod nvme_fabrics 00:22:14.624 rmmod nvme_keyring 00:22:14.624 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:14.624 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:14.624 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:14.624 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 819824 ']' 00:22:14.624 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 819824 00:22:14.624 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 819824 ']' 00:22:14.624 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 819824 00:22:14.624 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:22:14.624 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:14.624 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 819824 00:22:14.624 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:22:14.624 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:22:14.624 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 819824' 00:22:14.624 killing process with pid 819824 00:22:14.624 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 819824 00:22:14.624 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 819824 00:22:14.882 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:14.882 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:14.882 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:14.882 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:14.882 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:14.882 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.882 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.882 00:02:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.414 00:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:17.414 00:22:17.415 real 0m6.484s 00:22:17.415 user 0m11.054s 00:22:17.415 sys 0m2.454s 00:22:17.415 00:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:17.415 00:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.415 ************************************ 00:22:17.415 END TEST nvmf_bdevio_no_huge 00:22:17.415 ************************************ 00:22:17.415 00:02:22 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:17.415 00:02:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:17.415 00:02:22 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:22:17.415 00:02:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:17.415 ************************************ 00:22:17.415 START TEST nvmf_tls 00:22:17.415 ************************************ 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:17.415 * Looking for test storage... 00:22:17.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:17.415 
00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:17.415 00:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.314 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.314 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.314 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.314 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.314 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.314 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.314 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:22:19.314 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.314 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:19.315 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:19.315 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.315 00:02:24 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:19.315 Found net devices under 0000:09:00.0: cvl_0_0 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:19.315 Found net devices under 0000:09:00.1: cvl_0_1 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.315 00:02:24 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:19.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:22:19.315 00:22:19.315 --- 10.0.0.2 ping statistics --- 00:22:19.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.315 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:22:19.315 00:22:19.315 --- 10.0.0.1 ping statistics --- 00:22:19.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.315 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # 
modprobe nvme-tcp 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=822035 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 822035 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 822035 ']' 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:19.315 00:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.315 [2024-07-25 00:02:24.733267] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:22:19.315 [2024-07-25 00:02:24.733350] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.315 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.316 [2024-07-25 00:02:24.796662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.316 [2024-07-25 00:02:24.880957] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.316 [2024-07-25 00:02:24.881020] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.316 [2024-07-25 00:02:24.881049] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.316 [2024-07-25 00:02:24.881060] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.316 [2024-07-25 00:02:24.881069] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:19.316 [2024-07-25 00:02:24.881113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.316 00:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:19.316 00:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:19.316 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:19.316 00:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:19.316 00:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.316 00:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.316 00:02:24 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:19.316 00:02:24 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:19.574 true 00:22:19.574 00:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:19.574 00:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:19.832 00:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:19.832 00:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:19.832 00:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:20.091 00:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:20.091 00:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:20.351 00:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:20.351 00:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:20.351 00:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:20.639 00:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:20.639 00:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:20.897 00:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:20.897 00:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:20.897 00:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:20.897 00:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:21.155 00:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:21.155 00:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:21.155 00:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:21.413 00:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:21.413 00:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:21.671 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:21.671 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:21.671 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:21.929 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:21.929 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.9uqP2rmPRL 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:22.187 
00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.j5DOv4pMqD 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.9uqP2rmPRL 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.j5DOv4pMqD 00:22:22.187 00:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:22.445 00:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:22.702 00:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.9uqP2rmPRL 00:22:22.702 00:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9uqP2rmPRL 00:22:22.702 00:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:22.960 [2024-07-25 00:02:28.638953] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.960 00:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:23.218 00:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:23.476 [2024-07-25 00:02:29.152302] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:23.476 [2024-07-25 00:02:29.152593] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:22:23.476 00:02:29 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:23.734 malloc0 00:22:23.734 00:02:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:23.992 00:02:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9uqP2rmPRL 00:22:24.250 [2024-07-25 00:02:29.902734] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:24.250 00:02:29 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.9uqP2rmPRL 00:22:24.250 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.453 Initializing NVMe Controllers 00:22:36.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:36.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:36.453 Initialization complete. Launching workers. 
00:22:36.453 ======================================================== 00:22:36.453 Latency(us) 00:22:36.453 Device Information : IOPS MiB/s Average min max 00:22:36.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7885.56 30.80 8119.42 1108.81 10480.29 00:22:36.453 ======================================================== 00:22:36.453 Total : 7885.56 30.80 8119.42 1108.81 10480.29 00:22:36.453 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9uqP2rmPRL 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9uqP2rmPRL' 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=823844 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 823844 /var/tmp/bdevperf.sock 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 823844 ']' 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.453 [2024-07-25 00:02:40.069153] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:36.453 [2024-07-25 00:02:40.069240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823844 ] 00:22:36.453 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.453 [2024-07-25 00:02:40.127203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.453 [2024-07-25 00:02:40.211911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:36.453 00:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9uqP2rmPRL 00:22:36.454 [2024-07-25 00:02:40.553900] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:36.454 [2024-07-25 00:02:40.554016] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:36.454 TLSTESTn1 00:22:36.454 00:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:36.454 Running I/O for 10 seconds... 00:22:46.413 00:22:46.413 Latency(us) 00:22:46.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.413 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:46.413 Verification LBA range: start 0x0 length 0x2000 00:22:46.413 TLSTESTn1 : 10.05 2509.40 9.80 0.00 0.00 50870.81 7767.23 79225.74 00:22:46.413 =================================================================================================================== 00:22:46.413 Total : 2509.40 9.80 0.00 0.00 50870.81 7767.23 79225.74 00:22:46.413 0 00:22:46.413 00:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:46.413 00:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 823844 00:22:46.413 00:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 823844 ']' 00:22:46.413 00:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 823844 00:22:46.413 00:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:46.413 00:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:46.413 00:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 823844 00:22:46.413 00:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:46.413 00:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:46.413 00:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 823844' 00:22:46.413 killing process with pid 823844 00:22:46.413 00:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 823844 00:22:46.413 Received shutdown signal, test time was about 10.000000 seconds 00:22:46.413 00:22:46.413 Latency(us) 
00:22:46.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.413 =================================================================================================================== 00:22:46.413 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:46.413 [2024-07-25 00:02:50.853486] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:46.413 00:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 823844 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j5DOv4pMqD 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j5DOv4pMqD 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j5DOv4pMqD 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.j5DOv4pMqD' 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=825116 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 825116 /var/tmp/bdevperf.sock 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 825116 ']' 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.413 [2024-07-25 00:02:51.126620] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:22:46.413 [2024-07-25 00:02:51.126701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid825116 ] 00:22:46.413 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.413 [2024-07-25 00:02:51.184479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.413 [2024-07-25 00:02:51.267657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:46.413 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.j5DOv4pMqD 00:22:46.413 [2024-07-25 00:02:51.621418] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:46.413 [2024-07-25 00:02:51.621541] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:46.413 [2024-07-25 00:02:51.631422] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:46.413 [2024-07-25 00:02:51.632341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13da840 (107): Transport endpoint is not connected 00:22:46.413 [2024-07-25 00:02:51.633331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13da840 (9): Bad file descriptor 00:22:46.413 [2024-07-25 00:02:51.634330] 
nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:46.414 [2024-07-25 00:02:51.634350] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:46.414 [2024-07-25 00:02:51.634368] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:46.414 request: 00:22:46.414 { 00:22:46.414 "name": "TLSTEST", 00:22:46.414 "trtype": "tcp", 00:22:46.414 "traddr": "10.0.0.2", 00:22:46.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:46.414 "adrfam": "ipv4", 00:22:46.414 "trsvcid": "4420", 00:22:46.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.414 "psk": "/tmp/tmp.j5DOv4pMqD", 00:22:46.414 "method": "bdev_nvme_attach_controller", 00:22:46.414 "req_id": 1 00:22:46.414 } 00:22:46.414 Got JSON-RPC error response 00:22:46.414 response: 00:22:46.414 { 00:22:46.414 "code": -5, 00:22:46.414 "message": "Input/output error" 00:22:46.414 } 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 825116 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 825116 ']' 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 825116 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 825116 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 825116' 00:22:46.414 killing process with pid 825116 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 825116 
00:22:46.414 Received shutdown signal, test time was about 10.000000 seconds 00:22:46.414 00:22:46.414 Latency(us) 00:22:46.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.414 =================================================================================================================== 00:22:46.414 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:46.414 [2024-07-25 00:02:51.677471] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 825116 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9uqP2rmPRL 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9uqP2rmPRL 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9uqP2rmPRL 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9uqP2rmPRL' 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=825250 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 825250 /var/tmp/bdevperf.sock 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 825250 ']' 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:46.414 00:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.414 [2024-07-25 00:02:51.914043] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:46.414 [2024-07-25 00:02:51.914133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid825250 ] 00:22:46.414 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.414 [2024-07-25 00:02:51.972144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.414 [2024-07-25 00:02:52.058834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.672 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:46.672 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:46.672 00:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.9uqP2rmPRL 00:22:46.672 [2024-07-25 00:02:52.385163] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:46.672 [2024-07-25 00:02:52.385302] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:46.931 [2024-07-25 00:02:52.393264] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:46.931 [2024-07-25 00:02:52.393293] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 
nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:46.931 [2024-07-25 00:02:52.393354] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:46.931 [2024-07-25 00:02:52.394247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b0840 (107): Transport endpoint is not connected 00:22:46.931 [2024-07-25 00:02:52.395236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b0840 (9): Bad file descriptor 00:22:46.931 [2024-07-25 00:02:52.396234] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:46.931 [2024-07-25 00:02:52.396254] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:46.931 [2024-07-25 00:02:52.396287] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:46.931 request: 00:22:46.931 { 00:22:46.931 "name": "TLSTEST", 00:22:46.931 "trtype": "tcp", 00:22:46.931 "traddr": "10.0.0.2", 00:22:46.931 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:46.931 "adrfam": "ipv4", 00:22:46.931 "trsvcid": "4420", 00:22:46.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.931 "psk": "/tmp/tmp.9uqP2rmPRL", 00:22:46.931 "method": "bdev_nvme_attach_controller", 00:22:46.931 "req_id": 1 00:22:46.931 } 00:22:46.931 Got JSON-RPC error response 00:22:46.931 response: 00:22:46.931 { 00:22:46.931 "code": -5, 00:22:46.931 "message": "Input/output error" 00:22:46.931 } 00:22:46.931 00:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 825250 00:22:46.931 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 825250 ']' 00:22:46.931 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 825250 00:22:46.931 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:46.931 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:46.931 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 825250 00:22:46.931 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:46.931 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:46.931 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 825250' 00:22:46.931 killing process with pid 825250 00:22:46.931 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 825250 00:22:46.931 Received shutdown signal, test time was about 10.000000 seconds 00:22:46.931 00:22:46.931 Latency(us) 00:22:46.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.931 =================================================================================================================== 00:22:46.931 Total : 0.00 0.00 0.00 0.00 
0.00 18446744073709551616.00 0.00 00:22:46.931 [2024-07-25 00:02:52.445363] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:46.931 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 825250 00:22:47.189 00:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:47.189 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:47.189 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:47.189 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:47.189 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:47.189 00:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9uqP2rmPRL 00:22:47.189 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:47.189 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9uqP2rmPRL 00:22:47.189 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:47.189 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.189 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:47.190 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.190 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9uqP2rmPRL 00:22:47.190 00:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:47.190 00:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:47.190 00:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host1 00:22:47.190 00:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9uqP2rmPRL' 00:22:47.190 00:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.190 00:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=825387 00:22:47.190 00:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:47.190 00:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:47.190 00:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 825387 /var/tmp/bdevperf.sock 00:22:47.190 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 825387 ']' 00:22:47.190 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.190 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:47.190 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.190 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:47.190 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.190 [2024-07-25 00:02:52.702224] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:22:47.190 [2024-07-25 00:02:52.702308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid825387 ] 00:22:47.190 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.190 [2024-07-25 00:02:52.759072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.190 [2024-07-25 00:02:52.841929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.448 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:47.448 00:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:47.448 00:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9uqP2rmPRL 00:22:47.706 [2024-07-25 00:02:53.178623] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:47.706 [2024-07-25 00:02:53.178747] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:47.706 [2024-07-25 00:02:53.185382] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:47.706 [2024-07-25 00:02:53.185437] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:47.706 [2024-07-25 00:02:53.185487] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:47.706 
[2024-07-25 00:02:53.185636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d2840 (107): Transport endpoint is not connected 00:22:47.706 [2024-07-25 00:02:53.186637] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d2840 (9): Bad file descriptor 00:22:47.706 [2024-07-25 00:02:53.187636] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:47.706 [2024-07-25 00:02:53.187660] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:47.706 [2024-07-25 00:02:53.187695] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:47.706 request: 00:22:47.706 { 00:22:47.706 "name": "TLSTEST", 00:22:47.706 "trtype": "tcp", 00:22:47.706 "traddr": "10.0.0.2", 00:22:47.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:47.706 "adrfam": "ipv4", 00:22:47.706 "trsvcid": "4420", 00:22:47.706 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:47.706 "psk": "/tmp/tmp.9uqP2rmPRL", 00:22:47.706 "method": "bdev_nvme_attach_controller", 00:22:47.706 "req_id": 1 00:22:47.706 } 00:22:47.706 Got JSON-RPC error response 00:22:47.706 response: 00:22:47.706 { 00:22:47.706 "code": -5, 00:22:47.706 "message": "Input/output error" 00:22:47.706 } 00:22:47.706 00:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 825387 00:22:47.706 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 825387 ']' 00:22:47.706 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 825387 00:22:47.706 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:47.706 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:47.706 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 825387 00:22:47.706 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
process_name=reactor_2 00:22:47.706 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:47.706 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 825387' 00:22:47.706 killing process with pid 825387 00:22:47.706 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 825387 00:22:47.706 Received shutdown signal, test time was about 10.000000 seconds 00:22:47.706 00:22:47.706 Latency(us) 00:22:47.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.706 =================================================================================================================== 00:22:47.706 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:47.706 [2024-07-25 00:02:53.231908] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:47.706 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 825387 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:47.963 
00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=825407 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 825407 /var/tmp/bdevperf.sock 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 825407 ']' 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:47.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:47.963 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.963 [2024-07-25 00:02:53.469562] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:47.963 [2024-07-25 00:02:53.469642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid825407 ] 00:22:47.963 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.963 [2024-07-25 00:02:53.528550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.963 [2024-07-25 00:02:53.615633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.221 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:48.221 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:48.221 00:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:48.479 [2024-07-25 00:02:53.967476] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:48.479 [2024-07-25 00:02:53.969358] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b79f10 (9): Bad file descriptor 00:22:48.479 [2024-07-25 00:02:53.970353] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:48.479 [2024-07-25 00:02:53.970374] 
nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:48.479 [2024-07-25 00:02:53.970406] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:48.479 request: 00:22:48.479 { 00:22:48.479 "name": "TLSTEST", 00:22:48.479 "trtype": "tcp", 00:22:48.479 "traddr": "10.0.0.2", 00:22:48.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.479 "adrfam": "ipv4", 00:22:48.479 "trsvcid": "4420", 00:22:48.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.479 "method": "bdev_nvme_attach_controller", 00:22:48.479 "req_id": 1 00:22:48.479 } 00:22:48.479 Got JSON-RPC error response 00:22:48.479 response: 00:22:48.479 { 00:22:48.479 "code": -5, 00:22:48.479 "message": "Input/output error" 00:22:48.479 } 00:22:48.479 00:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 825407 00:22:48.479 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 825407 ']' 00:22:48.479 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 825407 00:22:48.479 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:48.479 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:48.479 00:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 825407 00:22:48.479 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:48.479 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:48.479 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 825407' 00:22:48.479 killing process with pid 825407 00:22:48.479 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 825407 00:22:48.479 Received shutdown signal, test time was about 10.000000 seconds 00:22:48.479 00:22:48.479 Latency(us) 00:22:48.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:48.479 =================================================================================================================== 00:22:48.479 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:48.479 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 825407 00:22:48.738 00:02:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:48.738 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:48.738 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:48.738 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:48.738 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:48.738 00:02:54 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 822035 00:22:48.738 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 822035 ']' 00:22:48.738 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 822035 00:22:48.738 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:48.738 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:48.738 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 822035 00:22:48.738 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:48.738 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:48.738 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 822035' 00:22:48.738 killing process with pid 822035 00:22:48.738 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 822035 00:22:48.738 [2024-07-25 00:02:54.266395] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:48.738 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- 
# wait 822035 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.RLP3HbYrEz 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.RLP3HbYrEz 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=825559 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:48.996 00:02:54 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 825559 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 825559 ']' 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:48.996 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.996 [2024-07-25 00:02:54.616211] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:48.996 [2024-07-25 00:02:54.616300] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.996 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.996 [2024-07-25 00:02:54.680599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.255 [2024-07-25 00:02:54.769854] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.255 [2024-07-25 00:02:54.769918] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.255 [2024-07-25 00:02:54.769932] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.255 [2024-07-25 00:02:54.769943] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
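[editor's note] The `format_interchange_psk` / `format_key` trace above wraps the raw hex key into the NVMe/TCP TLS PSK interchange form `NVMeTLSkey-1:<hash>:<base64>:`. A minimal sketch of that wrapping, under the assumption that the base64 payload is the configured key with a little-endian CRC32 appended (the real helper is the inline `python -` heredoc in SPDK's `nvmf/common.sh`; only the structure is asserted here, not the exact checksum bytes):

```python
import base64
import zlib

def format_interchange_psk(hex_key: str, hash_id: int) -> str:
    """Sketch of the TLS PSK interchange format seen in the log:
    NVMeTLSkey-1:<hash>:<base64(key || crc32)>:"""
    payload = hex_key.encode("ascii")
    # Assumption: a little-endian CRC32 of the payload is appended
    # before base64 encoding.
    crc = zlib.crc32(payload).to_bytes(4, "little")
    b64 = base64.b64encode(payload + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02d}:{b64}:"

key = "00112233445566778899aabbccddeeff0011223344556677"
print(format_interchange_psk(key, 2))
```

The log then writes this value to a `mktemp` path and applies `chmod 0600`, which matters later: the target refuses keys with looser permissions.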
00:22:49.255 [2024-07-25 00:02:54.769952] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.255 [2024-07-25 00:02:54.769992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.255 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:49.255 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:49.255 00:02:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:49.255 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.255 00:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.255 00:02:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.255 00:02:54 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.RLP3HbYrEz 00:22:49.255 00:02:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.RLP3HbYrEz 00:22:49.255 00:02:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:49.513 [2024-07-25 00:02:55.166534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.513 00:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:49.770 00:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:50.028 [2024-07-25 00:02:55.651851] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:50.028 [2024-07-25 00:02:55.652114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.028 
00:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:50.287 malloc0 00:22:50.287 00:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:50.545 00:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RLP3HbYrEz 00:22:50.803 [2024-07-25 00:02:56.393791] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:50.803 00:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RLP3HbYrEz 00:22:50.803 00:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:50.803 00:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:50.803 00:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:50.803 00:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RLP3HbYrEz' 00:22:50.803 00:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:50.803 00:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=825832 00:22:50.803 00:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:50.803 00:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:50.803 00:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 825832 /var/tmp/bdevperf.sock 00:22:50.803 00:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' 
-z 825832 ']' 00:22:50.803 00:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.803 00:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:50.803 00:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.803 00:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:50.803 00:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.803 [2024-07-25 00:02:56.449180] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:50.803 [2024-07-25 00:02:56.449262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid825832 ] 00:22:50.803 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.803 [2024-07-25 00:02:56.505577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.061 [2024-07-25 00:02:56.593229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.061 00:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:51.061 00:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:51.061 00:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RLP3HbYrEz 00:22:51.319 [2024-07-25 00:02:56.918960] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:22:51.319 [2024-07-25 00:02:56.919078] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:51.319 TLSTESTn1 00:22:51.319 00:02:57 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:51.577 Running I/O for 10 seconds... 00:23:01.625 00:23:01.625 Latency(us) 00:23:01.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.625 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:01.625 Verification LBA range: start 0x0 length 0x2000 00:23:01.625 TLSTESTn1 : 10.05 2458.95 9.61 0.00 0.00 51911.89 7087.60 82332.63 00:23:01.625 =================================================================================================================== 00:23:01.625 Total : 2458.95 9.61 0.00 0.00 51911.89 7087.60 82332.63 00:23:01.625 0 00:23:01.625 00:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:01.625 00:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 825832 00:23:01.625 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 825832 ']' 00:23:01.625 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 825832 00:23:01.625 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:01.625 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:01.625 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 825832 00:23:01.625 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:01.625 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:01.625 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process 
with pid 825832' 00:23:01.625 killing process with pid 825832 00:23:01.625 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 825832 00:23:01.625 Received shutdown signal, test time was about 10.000000 seconds 00:23:01.625 00:23:01.625 Latency(us) 00:23:01.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.625 =================================================================================================================== 00:23:01.625 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:01.625 [2024-07-25 00:03:07.241599] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:01.625 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 825832 00:23:01.883 00:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.RLP3HbYrEz 00:23:01.883 00:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RLP3HbYrEz 00:23:01.883 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:01.883 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RLP3HbYrEz 00:23:01.883 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:01.883 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:01.883 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:01.883 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:01.883 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RLP3HbYrEz 00:23:01.883 00:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
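[editor's note] The MiB/s column in the TLSTESTn1 summary above follows directly from the reported IOPS and the 4096-byte I/O size configured with `-o 4096`:

```python
iops = 2458.95           # from the TLSTESTn1 summary line
io_size = 4096           # bytes per I/O (bdevperf -o 4096)

# MiB/s = IOPS * bytes-per-IO / 2^20
mib_per_s = iops * io_size / (1024 ** 2)
print(f"{mib_per_s:.2f} MiB/s")
```

This reproduces the 9.61 MiB/s shown over the 10-second verify run.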
00:23:01.884 00:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:01.884 00:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:01.884 00:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RLP3HbYrEz' 00:23:01.884 00:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:01.884 00:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=827265 00:23:01.884 00:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:01.884 00:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:01.884 00:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 827265 /var/tmp/bdevperf.sock 00:23:01.884 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 827265 ']' 00:23:01.884 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.884 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:01.884 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.884 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:01.884 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.884 [2024-07-25 00:03:07.501260] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:23:01.884 [2024-07-25 00:03:07.501338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid827265 ] 00:23:01.884 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.884 [2024-07-25 00:03:07.561315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.142 [2024-07-25 00:03:07.657835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.142 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:02.142 00:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:02.142 00:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RLP3HbYrEz 00:23:02.401 [2024-07-25 00:03:07.998216] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:02.401 [2024-07-25 00:03:07.998286] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:02.401 [2024-07-25 00:03:07.998301] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.RLP3HbYrEz 00:23:02.401 request: 00:23:02.401 { 00:23:02.401 "name": "TLSTEST", 00:23:02.401 "trtype": "tcp", 00:23:02.401 "traddr": "10.0.0.2", 00:23:02.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:02.401 "adrfam": "ipv4", 00:23:02.401 "trsvcid": "4420", 00:23:02.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.401 "psk": "/tmp/tmp.RLP3HbYrEz", 00:23:02.401 "method": "bdev_nvme_attach_controller", 00:23:02.401 "req_id": 1 00:23:02.401 } 00:23:02.401 Got JSON-RPC error response 00:23:02.401 response: 00:23:02.401 { 00:23:02.401 "code": -1, 00:23:02.401 
"message": "Operation not permitted" 00:23:02.401 } 00:23:02.401 00:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 827265 00:23:02.401 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 827265 ']' 00:23:02.401 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 827265 00:23:02.401 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:02.401 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:02.401 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 827265 00:23:02.401 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:02.401 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:02.401 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 827265' 00:23:02.401 killing process with pid 827265 00:23:02.401 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 827265 00:23:02.401 Received shutdown signal, test time was about 10.000000 seconds 00:23:02.401 00:23:02.401 Latency(us) 00:23:02.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.401 =================================================================================================================== 00:23:02.401 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:02.401 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 827265 00:23:02.659 00:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:02.659 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:02.659 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:02.659 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:02.659 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 
)) 00:23:02.659 00:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 825559 00:23:02.659 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 825559 ']' 00:23:02.659 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 825559 00:23:02.659 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:02.659 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:02.659 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 825559 00:23:02.659 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:02.659 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:02.659 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 825559' 00:23:02.659 killing process with pid 825559 00:23:02.659 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 825559 00:23:02.659 [2024-07-25 00:03:08.278643] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:02.659 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 825559 00:23:02.917 00:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:02.917 00:03:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:02.917 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:02.917 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.917 00:03:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=827765 00:23:02.917 00:03:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:02.917 00:03:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 
827765 00:23:02.917 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 827765 ']' 00:23:02.917 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.917 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:02.917 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.917 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:02.917 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.917 [2024-07-25 00:03:08.564653] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:02.917 [2024-07-25 00:03:08.564750] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.917 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.917 [2024-07-25 00:03:08.633354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.176 [2024-07-25 00:03:08.725678] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.176 [2024-07-25 00:03:08.725751] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.176 [2024-07-25 00:03:08.725767] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.176 [2024-07-25 00:03:08.725780] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.176 [2024-07-25 00:03:08.725792] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
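[editor's note] The `chmod 0666` case above fails with "Incorrect permissions for PSK file" and JSON-RPC code -1 (Operation not permitted) because SPDK refuses PSK files accessible to group or others. The actual check is in SPDK's C code (`bdev_nvme.c` / `tcp.c` per the log lines); a rough, illustrative Python equivalent of the policy being enforced:

```python
import os
import stat
import tempfile

def psk_permissions_ok(path: str) -> bool:
    # Reject key files whose mode grants any group/other access,
    # mirroring why 0600 passes and 0666 is refused in the log.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & 0o077 == 0

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o600)
print(psk_permissions_ok(path))   # 0600: accepted
os.chmod(path, 0o666)
print(psk_permissions_ok(path))   # 0666: rejected
os.unlink(path)
```

The test then restores `chmod 0600` before the next `setup_nvmf_tgt` run, which is why the subsequent attach succeeds.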
00:23:03.176 [2024-07-25 00:03:08.725823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.176 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:03.176 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:03.176 00:03:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:03.176 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:03.176 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.176 00:03:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.176 00:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.RLP3HbYrEz 00:23:03.176 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:03.176 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.RLP3HbYrEz 00:23:03.176 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:03.176 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:03.176 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:03.176 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:03.176 00:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.RLP3HbYrEz 00:23:03.176 00:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.RLP3HbYrEz 00:23:03.176 00:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:03.433 [2024-07-25 00:03:09.109645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.434 00:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:03.691 00:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:03.948 [2024-07-25 00:03:09.663250] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:03.948 [2024-07-25 00:03:09.663538] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.205 00:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:04.205 malloc0 00:23:04.462 00:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:04.719 00:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RLP3HbYrEz 00:23:04.719 [2024-07-25 00:03:10.412796] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:04.719 [2024-07-25 00:03:10.412843] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:04.719 [2024-07-25 00:03:10.412879] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:04.719 request: 00:23:04.719 { 00:23:04.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.719 "host": "nqn.2016-06.io.spdk:host1", 00:23:04.719 "psk": "/tmp/tmp.RLP3HbYrEz", 00:23:04.719 "method": "nvmf_subsystem_add_host", 00:23:04.719 "req_id": 1 00:23:04.719 } 00:23:04.719 Got JSON-RPC error response 00:23:04.719 response: 00:23:04.719 { 00:23:04.719 "code": -32603, 00:23:04.719 
"message": "Internal error" 00:23:04.719 } 00:23:04.719 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:04.719 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:04.719 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:04.719 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:04.719 00:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 827765 00:23:04.719 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 827765 ']' 00:23:04.719 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 827765 00:23:04.977 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:04.977 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:04.977 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 827765 00:23:04.977 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:04.977 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:04.977 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 827765' 00:23:04.977 killing process with pid 827765 00:23:04.977 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 827765 00:23:04.977 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 827765 00:23:05.235 00:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.RLP3HbYrEz 00:23:05.235 00:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:05.235 00:03:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:05.235 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:05.235 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.235 00:03:10 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=828154 00:23:05.235 00:03:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:05.235 00:03:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 828154 00:23:05.235 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 828154 ']' 00:23:05.235 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.235 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:05.235 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.235 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:05.235 00:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.235 [2024-07-25 00:03:10.763938] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:05.235 [2024-07-25 00:03:10.764029] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.235 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.235 [2024-07-25 00:03:10.831989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.235 [2024-07-25 00:03:10.924302] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.235 [2024-07-25 00:03:10.924355] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:05.235 [2024-07-25 00:03:10.924371] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.235 [2024-07-25 00:03:10.924393] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.235 [2024-07-25 00:03:10.924405] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.235 [2024-07-25 00:03:10.924435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.492 00:03:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:05.492 00:03:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:05.492 00:03:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:05.492 00:03:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.492 00:03:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.492 00:03:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.492 00:03:11 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.RLP3HbYrEz 00:23:05.492 00:03:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.RLP3HbYrEz 00:23:05.492 00:03:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:05.750 [2024-07-25 00:03:11.273948] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.750 00:03:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:06.007 00:03:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:23:06.264 [2024-07-25 00:03:11.739175] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:06.264 [2024-07-25 00:03:11.739403] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.264 00:03:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:06.522 malloc0 00:23:06.522 00:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:06.779 00:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RLP3HbYrEz 00:23:06.779 [2024-07-25 00:03:12.472365] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:06.779 00:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=828367 00:23:06.779 00:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:06.779 00:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:06.779 00:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 828367 /var/tmp/bdevperf.sock 00:23:06.779 00:03:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 828367 ']' 00:23:06.779 00:03:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.779 00:03:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:06.779 00:03:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:23:06.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.779 00:03:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:06.779 00:03:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.036 [2024-07-25 00:03:12.531950] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:07.037 [2024-07-25 00:03:12.532018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828367 ] 00:23:07.037 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.037 [2024-07-25 00:03:12.587828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.037 [2024-07-25 00:03:12.674281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.294 00:03:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:07.294 00:03:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:07.294 00:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RLP3HbYrEz 00:23:07.294 [2024-07-25 00:03:13.006063] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.294 [2024-07-25 00:03:13.006208] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:07.551 TLSTESTn1 00:23:07.551 00:03:13 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:07.809 00:03:13 
nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:07.809 "subsystems": [ 00:23:07.809 { 00:23:07.809 "subsystem": "keyring", 00:23:07.809 "config": [] 00:23:07.809 }, 00:23:07.809 { 00:23:07.809 "subsystem": "iobuf", 00:23:07.809 "config": [ 00:23:07.809 { 00:23:07.809 "method": "iobuf_set_options", 00:23:07.809 "params": { 00:23:07.809 "small_pool_count": 8192, 00:23:07.809 "large_pool_count": 1024, 00:23:07.809 "small_bufsize": 8192, 00:23:07.809 "large_bufsize": 135168 00:23:07.809 } 00:23:07.809 } 00:23:07.809 ] 00:23:07.809 }, 00:23:07.809 { 00:23:07.809 "subsystem": "sock", 00:23:07.809 "config": [ 00:23:07.809 { 00:23:07.809 "method": "sock_set_default_impl", 00:23:07.809 "params": { 00:23:07.809 "impl_name": "posix" 00:23:07.809 } 00:23:07.809 }, 00:23:07.809 { 00:23:07.809 "method": "sock_impl_set_options", 00:23:07.809 "params": { 00:23:07.809 "impl_name": "ssl", 00:23:07.809 "recv_buf_size": 4096, 00:23:07.809 "send_buf_size": 4096, 00:23:07.809 "enable_recv_pipe": true, 00:23:07.809 "enable_quickack": false, 00:23:07.809 "enable_placement_id": 0, 00:23:07.809 "enable_zerocopy_send_server": true, 00:23:07.809 "enable_zerocopy_send_client": false, 00:23:07.809 "zerocopy_threshold": 0, 00:23:07.809 "tls_version": 0, 00:23:07.809 "enable_ktls": false 00:23:07.809 } 00:23:07.809 }, 00:23:07.809 { 00:23:07.809 "method": "sock_impl_set_options", 00:23:07.809 "params": { 00:23:07.809 "impl_name": "posix", 00:23:07.809 "recv_buf_size": 2097152, 00:23:07.809 "send_buf_size": 2097152, 00:23:07.809 "enable_recv_pipe": true, 00:23:07.809 "enable_quickack": false, 00:23:07.809 "enable_placement_id": 0, 00:23:07.809 "enable_zerocopy_send_server": true, 00:23:07.809 "enable_zerocopy_send_client": false, 00:23:07.809 "zerocopy_threshold": 0, 00:23:07.809 "tls_version": 0, 00:23:07.809 "enable_ktls": false 00:23:07.809 } 00:23:07.809 } 00:23:07.809 ] 00:23:07.809 }, 00:23:07.809 { 00:23:07.809 "subsystem": "vmd", 00:23:07.809 "config": [] 00:23:07.809 }, 
00:23:07.809 { 00:23:07.809 "subsystem": "accel", 00:23:07.809 "config": [ 00:23:07.809 { 00:23:07.809 "method": "accel_set_options", 00:23:07.809 "params": { 00:23:07.809 "small_cache_size": 128, 00:23:07.809 "large_cache_size": 16, 00:23:07.809 "task_count": 2048, 00:23:07.809 "sequence_count": 2048, 00:23:07.809 "buf_count": 2048 00:23:07.809 } 00:23:07.809 } 00:23:07.809 ] 00:23:07.809 }, 00:23:07.809 { 00:23:07.809 "subsystem": "bdev", 00:23:07.809 "config": [ 00:23:07.809 { 00:23:07.809 "method": "bdev_set_options", 00:23:07.809 "params": { 00:23:07.809 "bdev_io_pool_size": 65535, 00:23:07.809 "bdev_io_cache_size": 256, 00:23:07.809 "bdev_auto_examine": true, 00:23:07.809 "iobuf_small_cache_size": 128, 00:23:07.809 "iobuf_large_cache_size": 16 00:23:07.809 } 00:23:07.809 }, 00:23:07.809 { 00:23:07.809 "method": "bdev_raid_set_options", 00:23:07.809 "params": { 00:23:07.809 "process_window_size_kb": 1024 00:23:07.809 } 00:23:07.809 }, 00:23:07.809 { 00:23:07.810 "method": "bdev_iscsi_set_options", 00:23:07.810 "params": { 00:23:07.810 "timeout_sec": 30 00:23:07.810 } 00:23:07.810 }, 00:23:07.810 { 00:23:07.810 "method": "bdev_nvme_set_options", 00:23:07.810 "params": { 00:23:07.810 "action_on_timeout": "none", 00:23:07.810 "timeout_us": 0, 00:23:07.810 "timeout_admin_us": 0, 00:23:07.810 "keep_alive_timeout_ms": 10000, 00:23:07.810 "arbitration_burst": 0, 00:23:07.810 "low_priority_weight": 0, 00:23:07.810 "medium_priority_weight": 0, 00:23:07.810 "high_priority_weight": 0, 00:23:07.810 "nvme_adminq_poll_period_us": 10000, 00:23:07.810 "nvme_ioq_poll_period_us": 0, 00:23:07.810 "io_queue_requests": 0, 00:23:07.810 "delay_cmd_submit": true, 00:23:07.810 "transport_retry_count": 4, 00:23:07.810 "bdev_retry_count": 3, 00:23:07.810 "transport_ack_timeout": 0, 00:23:07.810 "ctrlr_loss_timeout_sec": 0, 00:23:07.810 "reconnect_delay_sec": 0, 00:23:07.810 "fast_io_fail_timeout_sec": 0, 00:23:07.810 "disable_auto_failback": false, 00:23:07.810 "generate_uuids": false, 
00:23:07.810 "transport_tos": 0, 00:23:07.810 "nvme_error_stat": false, 00:23:07.810 "rdma_srq_size": 0, 00:23:07.810 "io_path_stat": false, 00:23:07.810 "allow_accel_sequence": false, 00:23:07.810 "rdma_max_cq_size": 0, 00:23:07.810 "rdma_cm_event_timeout_ms": 0, 00:23:07.810 "dhchap_digests": [ 00:23:07.810 "sha256", 00:23:07.810 "sha384", 00:23:07.810 "sha512" 00:23:07.810 ], 00:23:07.810 "dhchap_dhgroups": [ 00:23:07.810 "null", 00:23:07.810 "ffdhe2048", 00:23:07.810 "ffdhe3072", 00:23:07.810 "ffdhe4096", 00:23:07.810 "ffdhe6144", 00:23:07.810 "ffdhe8192" 00:23:07.810 ] 00:23:07.810 } 00:23:07.810 }, 00:23:07.810 { 00:23:07.810 "method": "bdev_nvme_set_hotplug", 00:23:07.810 "params": { 00:23:07.810 "period_us": 100000, 00:23:07.810 "enable": false 00:23:07.810 } 00:23:07.810 }, 00:23:07.810 { 00:23:07.810 "method": "bdev_malloc_create", 00:23:07.810 "params": { 00:23:07.810 "name": "malloc0", 00:23:07.810 "num_blocks": 8192, 00:23:07.810 "block_size": 4096, 00:23:07.810 "physical_block_size": 4096, 00:23:07.810 "uuid": "efb8613b-6aad-42c7-ad1d-e3ca363ef9fe", 00:23:07.810 "optimal_io_boundary": 0 00:23:07.810 } 00:23:07.810 }, 00:23:07.810 { 00:23:07.810 "method": "bdev_wait_for_examine" 00:23:07.810 } 00:23:07.810 ] 00:23:07.810 }, 00:23:07.810 { 00:23:07.810 "subsystem": "nbd", 00:23:07.810 "config": [] 00:23:07.810 }, 00:23:07.810 { 00:23:07.810 "subsystem": "scheduler", 00:23:07.810 "config": [ 00:23:07.810 { 00:23:07.810 "method": "framework_set_scheduler", 00:23:07.810 "params": { 00:23:07.810 "name": "static" 00:23:07.810 } 00:23:07.810 } 00:23:07.810 ] 00:23:07.810 }, 00:23:07.810 { 00:23:07.810 "subsystem": "nvmf", 00:23:07.810 "config": [ 00:23:07.810 { 00:23:07.810 "method": "nvmf_set_config", 00:23:07.810 "params": { 00:23:07.810 "discovery_filter": "match_any", 00:23:07.810 "admin_cmd_passthru": { 00:23:07.810 "identify_ctrlr": false 00:23:07.810 } 00:23:07.810 } 00:23:07.810 }, 00:23:07.810 { 00:23:07.810 "method": "nvmf_set_max_subsystems", 
00:23:07.810 "params": { 00:23:07.810 "max_subsystems": 1024 00:23:07.810 } 00:23:07.810 }, 00:23:07.810 { 00:23:07.810 "method": "nvmf_set_crdt", 00:23:07.810 "params": { 00:23:07.810 "crdt1": 0, 00:23:07.810 "crdt2": 0, 00:23:07.810 "crdt3": 0 00:23:07.810 } 00:23:07.810 }, 00:23:07.810 { 00:23:07.810 "method": "nvmf_create_transport", 00:23:07.810 "params": { 00:23:07.810 "trtype": "TCP", 00:23:07.810 "max_queue_depth": 128, 00:23:07.810 "max_io_qpairs_per_ctrlr": 127, 00:23:07.810 "in_capsule_data_size": 4096, 00:23:07.810 "max_io_size": 131072, 00:23:07.810 "io_unit_size": 131072, 00:23:07.810 "max_aq_depth": 128, 00:23:07.810 "num_shared_buffers": 511, 00:23:07.810 "buf_cache_size": 4294967295, 00:23:07.810 "dif_insert_or_strip": false, 00:23:07.810 "zcopy": false, 00:23:07.810 "c2h_success": false, 00:23:07.810 "sock_priority": 0, 00:23:07.810 "abort_timeout_sec": 1, 00:23:07.810 "ack_timeout": 0, 00:23:07.810 "data_wr_pool_size": 0 00:23:07.810 } 00:23:07.810 }, 00:23:07.810 { 00:23:07.810 "method": "nvmf_create_subsystem", 00:23:07.810 "params": { 00:23:07.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.810 "allow_any_host": false, 00:23:07.810 "serial_number": "SPDK00000000000001", 00:23:07.810 "model_number": "SPDK bdev Controller", 00:23:07.810 "max_namespaces": 10, 00:23:07.810 "min_cntlid": 1, 00:23:07.810 "max_cntlid": 65519, 00:23:07.810 "ana_reporting": false 00:23:07.810 } 00:23:07.810 }, 00:23:07.810 { 00:23:07.810 "method": "nvmf_subsystem_add_host", 00:23:07.810 "params": { 00:23:07.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.810 "host": "nqn.2016-06.io.spdk:host1", 00:23:07.810 "psk": "/tmp/tmp.RLP3HbYrEz" 00:23:07.810 } 00:23:07.810 }, 00:23:07.810 { 00:23:07.810 "method": "nvmf_subsystem_add_ns", 00:23:07.810 "params": { 00:23:07.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.810 "namespace": { 00:23:07.810 "nsid": 1, 00:23:07.810 "bdev_name": "malloc0", 00:23:07.810 "nguid": "EFB8613B6AAD42C7AD1DE3CA363EF9FE", 00:23:07.810 
"uuid": "efb8613b-6aad-42c7-ad1d-e3ca363ef9fe", 00:23:07.810 "no_auto_visible": false 00:23:07.810 } 00:23:07.810 } 00:23:07.810 }, 00:23:07.810 { 00:23:07.810 "method": "nvmf_subsystem_add_listener", 00:23:07.810 "params": { 00:23:07.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.810 "listen_address": { 00:23:07.810 "trtype": "TCP", 00:23:07.810 "adrfam": "IPv4", 00:23:07.810 "traddr": "10.0.0.2", 00:23:07.810 "trsvcid": "4420" 00:23:07.810 }, 00:23:07.810 "secure_channel": true 00:23:07.810 } 00:23:07.810 } 00:23:07.810 ] 00:23:07.810 } 00:23:07.810 ] 00:23:07.810 }' 00:23:07.810 00:03:13 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:08.068 00:03:13 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:08.068 "subsystems": [ 00:23:08.068 { 00:23:08.068 "subsystem": "keyring", 00:23:08.068 "config": [] 00:23:08.068 }, 00:23:08.068 { 00:23:08.068 "subsystem": "iobuf", 00:23:08.068 "config": [ 00:23:08.068 { 00:23:08.068 "method": "iobuf_set_options", 00:23:08.068 "params": { 00:23:08.068 "small_pool_count": 8192, 00:23:08.068 "large_pool_count": 1024, 00:23:08.068 "small_bufsize": 8192, 00:23:08.068 "large_bufsize": 135168 00:23:08.068 } 00:23:08.068 } 00:23:08.068 ] 00:23:08.068 }, 00:23:08.068 { 00:23:08.068 "subsystem": "sock", 00:23:08.068 "config": [ 00:23:08.068 { 00:23:08.068 "method": "sock_set_default_impl", 00:23:08.068 "params": { 00:23:08.068 "impl_name": "posix" 00:23:08.068 } 00:23:08.068 }, 00:23:08.068 { 00:23:08.068 "method": "sock_impl_set_options", 00:23:08.068 "params": { 00:23:08.068 "impl_name": "ssl", 00:23:08.068 "recv_buf_size": 4096, 00:23:08.068 "send_buf_size": 4096, 00:23:08.068 "enable_recv_pipe": true, 00:23:08.068 "enable_quickack": false, 00:23:08.068 "enable_placement_id": 0, 00:23:08.068 "enable_zerocopy_send_server": true, 00:23:08.068 "enable_zerocopy_send_client": false, 00:23:08.068 "zerocopy_threshold": 0, 
00:23:08.068 "tls_version": 0, 00:23:08.068 "enable_ktls": false 00:23:08.068 } 00:23:08.068 }, 00:23:08.068 { 00:23:08.068 "method": "sock_impl_set_options", 00:23:08.068 "params": { 00:23:08.068 "impl_name": "posix", 00:23:08.068 "recv_buf_size": 2097152, 00:23:08.068 "send_buf_size": 2097152, 00:23:08.068 "enable_recv_pipe": true, 00:23:08.068 "enable_quickack": false, 00:23:08.068 "enable_placement_id": 0, 00:23:08.068 "enable_zerocopy_send_server": true, 00:23:08.068 "enable_zerocopy_send_client": false, 00:23:08.068 "zerocopy_threshold": 0, 00:23:08.068 "tls_version": 0, 00:23:08.068 "enable_ktls": false 00:23:08.068 } 00:23:08.068 } 00:23:08.068 ] 00:23:08.068 }, 00:23:08.068 { 00:23:08.068 "subsystem": "vmd", 00:23:08.068 "config": [] 00:23:08.068 }, 00:23:08.068 { 00:23:08.068 "subsystem": "accel", 00:23:08.068 "config": [ 00:23:08.068 { 00:23:08.068 "method": "accel_set_options", 00:23:08.068 "params": { 00:23:08.068 "small_cache_size": 128, 00:23:08.068 "large_cache_size": 16, 00:23:08.068 "task_count": 2048, 00:23:08.068 "sequence_count": 2048, 00:23:08.068 "buf_count": 2048 00:23:08.068 } 00:23:08.068 } 00:23:08.068 ] 00:23:08.068 }, 00:23:08.068 { 00:23:08.068 "subsystem": "bdev", 00:23:08.068 "config": [ 00:23:08.068 { 00:23:08.068 "method": "bdev_set_options", 00:23:08.068 "params": { 00:23:08.068 "bdev_io_pool_size": 65535, 00:23:08.068 "bdev_io_cache_size": 256, 00:23:08.068 "bdev_auto_examine": true, 00:23:08.068 "iobuf_small_cache_size": 128, 00:23:08.068 "iobuf_large_cache_size": 16 00:23:08.068 } 00:23:08.068 }, 00:23:08.068 { 00:23:08.068 "method": "bdev_raid_set_options", 00:23:08.068 "params": { 00:23:08.068 "process_window_size_kb": 1024 00:23:08.068 } 00:23:08.068 }, 00:23:08.068 { 00:23:08.068 "method": "bdev_iscsi_set_options", 00:23:08.068 "params": { 00:23:08.068 "timeout_sec": 30 00:23:08.068 } 00:23:08.068 }, 00:23:08.068 { 00:23:08.068 "method": "bdev_nvme_set_options", 00:23:08.068 "params": { 00:23:08.068 "action_on_timeout": 
"none", 00:23:08.068 "timeout_us": 0, 00:23:08.068 "timeout_admin_us": 0, 00:23:08.068 "keep_alive_timeout_ms": 10000, 00:23:08.068 "arbitration_burst": 0, 00:23:08.068 "low_priority_weight": 0, 00:23:08.068 "medium_priority_weight": 0, 00:23:08.068 "high_priority_weight": 0, 00:23:08.068 "nvme_adminq_poll_period_us": 10000, 00:23:08.068 "nvme_ioq_poll_period_us": 0, 00:23:08.068 "io_queue_requests": 512, 00:23:08.068 "delay_cmd_submit": true, 00:23:08.068 "transport_retry_count": 4, 00:23:08.068 "bdev_retry_count": 3, 00:23:08.068 "transport_ack_timeout": 0, 00:23:08.068 "ctrlr_loss_timeout_sec": 0, 00:23:08.068 "reconnect_delay_sec": 0, 00:23:08.068 "fast_io_fail_timeout_sec": 0, 00:23:08.068 "disable_auto_failback": false, 00:23:08.068 "generate_uuids": false, 00:23:08.068 "transport_tos": 0, 00:23:08.068 "nvme_error_stat": false, 00:23:08.068 "rdma_srq_size": 0, 00:23:08.068 "io_path_stat": false, 00:23:08.068 "allow_accel_sequence": false, 00:23:08.068 "rdma_max_cq_size": 0, 00:23:08.068 "rdma_cm_event_timeout_ms": 0, 00:23:08.068 "dhchap_digests": [ 00:23:08.068 "sha256", 00:23:08.068 "sha384", 00:23:08.068 "sha512" 00:23:08.068 ], 00:23:08.068 "dhchap_dhgroups": [ 00:23:08.068 "null", 00:23:08.068 "ffdhe2048", 00:23:08.068 "ffdhe3072", 00:23:08.068 "ffdhe4096", 00:23:08.068 "ffdhe6144", 00:23:08.068 "ffdhe8192" 00:23:08.068 ] 00:23:08.068 } 00:23:08.068 }, 00:23:08.068 { 00:23:08.068 "method": "bdev_nvme_attach_controller", 00:23:08.068 "params": { 00:23:08.068 "name": "TLSTEST", 00:23:08.068 "trtype": "TCP", 00:23:08.068 "adrfam": "IPv4", 00:23:08.068 "traddr": "10.0.0.2", 00:23:08.068 "trsvcid": "4420", 00:23:08.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.068 "prchk_reftag": false, 00:23:08.068 "prchk_guard": false, 00:23:08.068 "ctrlr_loss_timeout_sec": 0, 00:23:08.068 "reconnect_delay_sec": 0, 00:23:08.068 "fast_io_fail_timeout_sec": 0, 00:23:08.068 "psk": "/tmp/tmp.RLP3HbYrEz", 00:23:08.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.069 
"hdgst": false, 00:23:08.069 "ddgst": false 00:23:08.069 } 00:23:08.069 }, 00:23:08.069 { 00:23:08.069 "method": "bdev_nvme_set_hotplug", 00:23:08.069 "params": { 00:23:08.069 "period_us": 100000, 00:23:08.069 "enable": false 00:23:08.069 } 00:23:08.069 }, 00:23:08.069 { 00:23:08.069 "method": "bdev_wait_for_examine" 00:23:08.069 } 00:23:08.069 ] 00:23:08.069 }, 00:23:08.069 { 00:23:08.069 "subsystem": "nbd", 00:23:08.069 "config": [] 00:23:08.069 } 00:23:08.069 ] 00:23:08.069 }' 00:23:08.069 00:03:13 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 828367 00:23:08.069 00:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 828367 ']' 00:23:08.069 00:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 828367 00:23:08.069 00:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:08.069 00:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:08.069 00:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 828367 00:23:08.069 00:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:08.069 00:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:08.069 00:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 828367' 00:23:08.069 killing process with pid 828367 00:23:08.069 00:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 828367 00:23:08.069 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.069 00:23:08.069 Latency(us) 00:23:08.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.069 =================================================================================================================== 00:23:08.069 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:08.069 [2024-07-25 00:03:13.746472] app.c:1024:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:08.069 00:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 828367 00:23:08.326 00:03:13 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 828154 00:23:08.326 00:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 828154 ']' 00:23:08.326 00:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 828154 00:23:08.326 00:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:08.326 00:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:08.326 00:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 828154 00:23:08.326 00:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:08.326 00:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:08.326 00:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 828154' 00:23:08.326 killing process with pid 828154 00:23:08.326 00:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 828154 00:23:08.326 [2024-07-25 00:03:14.005206] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:08.326 00:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 828154 00:23:08.584 00:03:14 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:08.584 00:03:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:08.584 00:03:14 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:08.584 "subsystems": [ 00:23:08.584 { 00:23:08.584 "subsystem": "keyring", 00:23:08.584 "config": [] 00:23:08.584 }, 00:23:08.584 { 00:23:08.584 "subsystem": "iobuf", 00:23:08.584 "config": [ 00:23:08.584 { 00:23:08.584 "method": 
"iobuf_set_options", 00:23:08.584 "params": { 00:23:08.584 "small_pool_count": 8192, 00:23:08.584 "large_pool_count": 1024, 00:23:08.584 "small_bufsize": 8192, 00:23:08.584 "large_bufsize": 135168 00:23:08.584 } 00:23:08.584 } 00:23:08.584 ] 00:23:08.584 }, 00:23:08.584 { 00:23:08.584 "subsystem": "sock", 00:23:08.584 "config": [ 00:23:08.584 { 00:23:08.584 "method": "sock_set_default_impl", 00:23:08.584 "params": { 00:23:08.584 "impl_name": "posix" 00:23:08.584 } 00:23:08.584 }, 00:23:08.584 { 00:23:08.584 "method": "sock_impl_set_options", 00:23:08.584 "params": { 00:23:08.584 "impl_name": "ssl", 00:23:08.584 "recv_buf_size": 4096, 00:23:08.584 "send_buf_size": 4096, 00:23:08.584 "enable_recv_pipe": true, 00:23:08.584 "enable_quickack": false, 00:23:08.584 "enable_placement_id": 0, 00:23:08.584 "enable_zerocopy_send_server": true, 00:23:08.584 "enable_zerocopy_send_client": false, 00:23:08.584 "zerocopy_threshold": 0, 00:23:08.584 "tls_version": 0, 00:23:08.584 "enable_ktls": false 00:23:08.584 } 00:23:08.584 }, 00:23:08.584 { 00:23:08.584 "method": "sock_impl_set_options", 00:23:08.584 "params": { 00:23:08.584 "impl_name": "posix", 00:23:08.584 "recv_buf_size": 2097152, 00:23:08.584 "send_buf_size": 2097152, 00:23:08.584 "enable_recv_pipe": true, 00:23:08.584 "enable_quickack": false, 00:23:08.584 "enable_placement_id": 0, 00:23:08.584 "enable_zerocopy_send_server": true, 00:23:08.584 "enable_zerocopy_send_client": false, 00:23:08.584 "zerocopy_threshold": 0, 00:23:08.584 "tls_version": 0, 00:23:08.584 "enable_ktls": false 00:23:08.584 } 00:23:08.584 } 00:23:08.584 ] 00:23:08.584 }, 00:23:08.584 { 00:23:08.584 "subsystem": "vmd", 00:23:08.584 "config": [] 00:23:08.584 }, 00:23:08.584 { 00:23:08.584 "subsystem": "accel", 00:23:08.584 "config": [ 00:23:08.584 { 00:23:08.584 "method": "accel_set_options", 00:23:08.584 "params": { 00:23:08.584 "small_cache_size": 128, 00:23:08.584 "large_cache_size": 16, 00:23:08.584 "task_count": 2048, 00:23:08.584 
"sequence_count": 2048, 00:23:08.584 "buf_count": 2048 00:23:08.584 } 00:23:08.584 } 00:23:08.584 ] 00:23:08.584 }, 00:23:08.584 { 00:23:08.584 "subsystem": "bdev", 00:23:08.584 "config": [ 00:23:08.584 { 00:23:08.584 "method": "bdev_set_options", 00:23:08.584 "params": { 00:23:08.584 "bdev_io_pool_size": 65535, 00:23:08.584 "bdev_io_cache_size": 256, 00:23:08.584 "bdev_auto_examine": true, 00:23:08.584 "iobuf_small_cache_size": 128, 00:23:08.584 "iobuf_large_cache_size": 16 00:23:08.584 } 00:23:08.584 }, 00:23:08.584 { 00:23:08.584 "method": "bdev_raid_set_options", 00:23:08.584 "params": { 00:23:08.584 "process_window_size_kb": 1024 00:23:08.584 } 00:23:08.584 }, 00:23:08.584 { 00:23:08.584 "method": "bdev_iscsi_set_options", 00:23:08.584 "params": { 00:23:08.584 "timeout_sec": 30 00:23:08.584 } 00:23:08.584 }, 00:23:08.584 { 00:23:08.584 "method": "bdev_nvme_set_options", 00:23:08.584 "params": { 00:23:08.584 "action_on_timeout": "none", 00:23:08.584 "timeout_us": 0, 00:23:08.584 "timeout_admin_us": 0, 00:23:08.584 "keep_alive_timeout_ms": 10000, 00:23:08.584 "arbitration_burst": 0, 00:23:08.584 "low_priority_weight": 0, 00:23:08.584 "medium_priority_weight": 0, 00:23:08.584 "high_priority_weight": 0, 00:23:08.584 "nvme_adminq_poll_period_us": 10000, 00:23:08.584 "nvme_ioq_poll_period_us": 0, 00:23:08.584 "io_queue_requests": 0, 00:23:08.584 "delay_cmd_submit": true, 00:23:08.584 "transport_retry_count": 4, 00:23:08.584 "bdev_retry_count": 3, 00:23:08.584 "transport_ack_timeout": 0, 00:23:08.584 "ctrlr_loss_timeout_sec": 0, 00:23:08.584 "reconnect_delay_sec": 0, 00:23:08.584 "fast_io_fail_timeout_sec": 0, 00:23:08.584 "disable_auto_failback": false, 00:23:08.584 "generate_uuids": false, 00:23:08.584 "transport_tos": 0, 00:23:08.584 "nvme_error_stat": false, 00:23:08.584 "rdma_srq_size": 0, 00:23:08.584 "io_path_stat": false, 00:23:08.584 "allow_accel_sequence": false, 00:23:08.584 "rdma_max_cq_size": 0, 00:23:08.584 "rdma_cm_event_timeout_ms": 0, 00:23:08.584 
"dhchap_digests": [ 00:23:08.584 "sha256", 00:23:08.584 "sha384", 00:23:08.584 "sha512" 00:23:08.584 ], 00:23:08.584 "dhchap_dhgroups": [ 00:23:08.584 "null", 00:23:08.584 "ffdhe2048", 00:23:08.584 "ffdhe3072", 00:23:08.584 "ffdhe4096", 00:23:08.584 "ffdhe6144", 00:23:08.584 "ffdhe8192" 00:23:08.584 ] 00:23:08.584 } 00:23:08.584 }, 00:23:08.584 { 00:23:08.584 "method": "bdev_nvme_set_hotplug", 00:23:08.584 "params": { 00:23:08.584 "period_us": 100000, 00:23:08.584 "enable": false 00:23:08.584 } 00:23:08.584 }, 00:23:08.584 { 00:23:08.584 "method": "bdev_malloc_create", 00:23:08.584 "params": { 00:23:08.584 "name": "malloc0", 00:23:08.584 "num_blocks": 8192, 00:23:08.584 "block_size": 4096, 00:23:08.584 "physical_block_size": 4096, 00:23:08.584 "uuid": "efb8613b-6aad-42c7-ad1d-e3ca363ef9fe", 00:23:08.584 "optimal_io_boundary": 0 00:23:08.584 } 00:23:08.584 }, 00:23:08.584 { 00:23:08.584 "method": "bdev_wait_for_examine" 00:23:08.585 } 00:23:08.585 ] 00:23:08.585 }, 00:23:08.585 { 00:23:08.585 "subsystem": "nbd", 00:23:08.585 "config": [] 00:23:08.585 }, 00:23:08.585 { 00:23:08.585 "subsystem": "scheduler", 00:23:08.585 "config": [ 00:23:08.585 { 00:23:08.585 "method": "framework_set_scheduler", 00:23:08.585 "params": { 00:23:08.585 "name": "static" 00:23:08.585 } 00:23:08.585 } 00:23:08.585 ] 00:23:08.585 }, 00:23:08.585 { 00:23:08.585 "subsystem": "nvmf", 00:23:08.585 "config": [ 00:23:08.585 { 00:23:08.585 "method": "nvmf_set_config", 00:23:08.585 "params": { 00:23:08.585 "discovery_filter": "match_any", 00:23:08.585 "admin_cmd_passthru": { 00:23:08.585 "identify_ctrlr": false 00:23:08.585 } 00:23:08.585 } 00:23:08.585 }, 00:23:08.585 { 00:23:08.585 "method": "nvmf_set_max_subsystems", 00:23:08.585 "params": { 00:23:08.585 "max_subsystems": 1024 00:23:08.585 } 00:23:08.585 }, 00:23:08.585 { 00:23:08.585 "method": "nvmf_set_crdt", 00:23:08.585 "params": { 00:23:08.585 "crdt1": 0, 00:23:08.585 "crdt2": 0, 00:23:08.585 "crdt3": 0 00:23:08.585 } 00:23:08.585 }, 
00:23:08.585 { 00:23:08.585 "method": "nvmf_create_transport", 00:23:08.585 "params": { 00:23:08.585 "trtype": "TCP", 00:23:08.585 "max_queue_depth": 128, 00:23:08.585 "max_io_qpairs_per_ctrlr": 127, 00:23:08.585 "in_capsule_data_size": 4096, 00:23:08.585 "max_io_size": 131072, 00:23:08.585 "io_unit_size": 131072, 00:23:08.585 "max_aq_depth": 128, 00:23:08.585 "num_shared_buffers": 511, 00:23:08.585 "buf_cache_size": 4294967295, 00:23:08.585 "dif_insert_or_strip": false, 00:23:08.585 "zcopy": false, 00:23:08.585 "c2h_success": false, 00:23:08.585 "sock_priority": 0, 00:23:08.585 "abort_timeout_sec": 1, 00:23:08.585 "ack_timeout": 0, 00:23:08.585 "data_wr_pool_size": 0 00:23:08.585 } 00:23:08.585 }, 00:23:08.585 { 00:23:08.585 "method": "nvmf_create_subsystem", 00:23:08.585 "params": { 00:23:08.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.585 "allow_any_host": false, 00:23:08.585 "serial_number": "SPDK00000000000001", 00:23:08.585 "model_number": "SPDK bdev Controller", 00:23:08.585 "max_namespaces": 10, 00:23:08.585 "min_cntlid": 1, 00:23:08.585 "max_cntlid": 65519, 00:23:08.585 "ana_reporting": false 00:23:08.585 } 00:23:08.585 }, 00:23:08.585 { 00:23:08.585 "method": "nvmf_subsystem_add_host", 00:23:08.585 "params": { 00:23:08.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.585 "host": "nqn.2016-06.io.spdk:host1", 00:23:08.585 "psk": "/tmp/tmp.RLP3HbYrEz" 00:23:08.585 } 00:23:08.585 }, 00:23:08.585 { 00:23:08.585 "method": "nvmf_subsystem_add_ns", 00:23:08.585 "params": { 00:23:08.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.585 "namespace": { 00:23:08.585 "nsid": 1, 00:23:08.585 "bdev_name": "malloc0", 00:23:08.585 "nguid": "EFB8613B6AAD42C7AD1DE3CA363EF9FE", 00:23:08.585 "uuid": "efb8613b-6aad-42c7-ad1d-e3ca363ef9fe", 00:23:08.585 "no_auto_visible": false 00:23:08.585 } 00:23:08.585 } 00:23:08.585 }, 00:23:08.585 { 00:23:08.585 "method": "nvmf_subsystem_add_listener", 00:23:08.585 "params": { 00:23:08.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:23:08.585 "listen_address": { 00:23:08.585 "trtype": "TCP", 00:23:08.585 "adrfam": "IPv4", 00:23:08.585 "traddr": "10.0.0.2", 00:23:08.585 "trsvcid": "4420" 00:23:08.585 }, 00:23:08.585 "secure_channel": true 00:23:08.585 } 00:23:08.585 } 00:23:08.585 ] 00:23:08.585 } 00:23:08.585 ] 00:23:08.585 }' 00:23:08.585 00:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:08.585 00:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.585 00:03:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=828642 00:23:08.585 00:03:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:08.585 00:03:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 828642 00:23:08.585 00:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 828642 ']' 00:23:08.585 00:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.585 00:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:08.585 00:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.585 00:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:08.585 00:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.585 [2024-07-25 00:03:14.299035] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:23:08.585 [2024-07-25 00:03:14.299140] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.843 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.843 [2024-07-25 00:03:14.367330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.843 [2024-07-25 00:03:14.456488] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.843 [2024-07-25 00:03:14.456537] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.843 [2024-07-25 00:03:14.456570] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.843 [2024-07-25 00:03:14.456582] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.843 [2024-07-25 00:03:14.456592] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:08.843 [2024-07-25 00:03:14.456668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.101 [2024-07-25 00:03:14.690448] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.101 [2024-07-25 00:03:14.706425] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:09.101 [2024-07-25 00:03:14.722483] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:09.101 [2024-07-25 00:03:14.739286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.667 00:03:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:09.667 00:03:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:09.667 00:03:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:09.667 00:03:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.667 00:03:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.667 00:03:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.667 00:03:15 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=828765 00:23:09.667 00:03:15 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 828765 /var/tmp/bdevperf.sock 00:23:09.667 00:03:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 828765 ']' 00:23:09.667 00:03:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.667 00:03:15 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:09.667 00:03:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:09.667 00:03:15 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.667 00:03:15 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:09.667 "subsystems": [ 00:23:09.667 { 00:23:09.667 "subsystem": "keyring", 00:23:09.667 "config": [] 00:23:09.667 }, 00:23:09.667 { 00:23:09.667 "subsystem": "iobuf", 00:23:09.667 "config": [ 00:23:09.667 { 00:23:09.667 "method": "iobuf_set_options", 00:23:09.667 "params": { 00:23:09.667 "small_pool_count": 8192, 00:23:09.667 "large_pool_count": 1024, 00:23:09.667 "small_bufsize": 8192, 00:23:09.667 "large_bufsize": 135168 00:23:09.667 } 00:23:09.667 } 00:23:09.667 ] 00:23:09.667 }, 00:23:09.667 { 00:23:09.667 "subsystem": "sock", 00:23:09.667 "config": [ 00:23:09.667 { 00:23:09.667 "method": "sock_set_default_impl", 00:23:09.667 "params": { 00:23:09.667 "impl_name": "posix" 00:23:09.667 } 00:23:09.667 }, 00:23:09.667 { 00:23:09.667 "method": "sock_impl_set_options", 00:23:09.667 "params": { 00:23:09.667 "impl_name": "ssl", 00:23:09.667 "recv_buf_size": 4096, 00:23:09.667 "send_buf_size": 4096, 00:23:09.667 "enable_recv_pipe": true, 00:23:09.667 "enable_quickack": false, 00:23:09.667 "enable_placement_id": 0, 00:23:09.667 "enable_zerocopy_send_server": true, 00:23:09.667 "enable_zerocopy_send_client": false, 00:23:09.667 "zerocopy_threshold": 0, 00:23:09.667 "tls_version": 0, 00:23:09.667 "enable_ktls": false 00:23:09.667 } 00:23:09.667 }, 00:23:09.668 { 00:23:09.668 "method": "sock_impl_set_options", 00:23:09.668 "params": { 00:23:09.668 "impl_name": "posix", 00:23:09.668 "recv_buf_size": 2097152, 00:23:09.668 "send_buf_size": 2097152, 00:23:09.668 "enable_recv_pipe": true, 00:23:09.668 "enable_quickack": false, 00:23:09.668 "enable_placement_id": 0, 00:23:09.668 "enable_zerocopy_send_server": true, 00:23:09.668 "enable_zerocopy_send_client": false, 00:23:09.668 "zerocopy_threshold": 0, 00:23:09.668 "tls_version": 0, 00:23:09.668 "enable_ktls": false 
00:23:09.668 } 00:23:09.668 } 00:23:09.668 ] 00:23:09.668 }, 00:23:09.668 { 00:23:09.668 "subsystem": "vmd", 00:23:09.668 "config": [] 00:23:09.668 }, 00:23:09.668 { 00:23:09.668 "subsystem": "accel", 00:23:09.668 "config": [ 00:23:09.668 { 00:23:09.668 "method": "accel_set_options", 00:23:09.668 "params": { 00:23:09.668 "small_cache_size": 128, 00:23:09.668 "large_cache_size": 16, 00:23:09.668 "task_count": 2048, 00:23:09.668 "sequence_count": 2048, 00:23:09.668 "buf_count": 2048 00:23:09.668 } 00:23:09.668 } 00:23:09.668 ] 00:23:09.668 }, 00:23:09.668 { 00:23:09.668 "subsystem": "bdev", 00:23:09.668 "config": [ 00:23:09.668 { 00:23:09.668 "method": "bdev_set_options", 00:23:09.668 "params": { 00:23:09.668 "bdev_io_pool_size": 65535, 00:23:09.668 "bdev_io_cache_size": 256, 00:23:09.668 "bdev_auto_examine": true, 00:23:09.668 "iobuf_small_cache_size": 128, 00:23:09.668 "iobuf_large_cache_size": 16 00:23:09.668 } 00:23:09.668 }, 00:23:09.668 { 00:23:09.668 "method": "bdev_raid_set_options", 00:23:09.668 "params": { 00:23:09.668 "process_window_size_kb": 1024 00:23:09.668 } 00:23:09.668 }, 00:23:09.668 { 00:23:09.668 "method": "bdev_iscsi_set_options", 00:23:09.668 "params": { 00:23:09.668 "timeout_sec": 30 00:23:09.668 } 00:23:09.668 }, 00:23:09.668 { 00:23:09.668 "method": "bdev_nvme_set_options", 00:23:09.668 "params": { 00:23:09.668 "action_on_timeout": "none", 00:23:09.668 "timeout_us": 0, 00:23:09.668 "timeout_admin_us": 0, 00:23:09.668 "keep_alive_timeout_ms": 10000, 00:23:09.668 "arbitration_burst": 0, 00:23:09.668 "low_priority_weight": 0, 00:23:09.668 "medium_priority_weight": 0, 00:23:09.668 "high_priority_weight": 0, 00:23:09.668 "nvme_adminq_poll_period_us": 10000, 00:23:09.668 "nvme_ioq_poll_period_us": 0, 00:23:09.668 "io_queue_requests": 512, 00:23:09.668 "delay_cmd_submit": true, 00:23:09.668 "transport_retry_count": 4, 00:23:09.668 "bdev_retry_count": 3, 00:23:09.668 "transport_ack_timeout": 0, 00:23:09.668 "ctrlr_loss_timeout_sec": 0, 00:23:09.668 
"reconnect_delay_sec": 0, 00:23:09.668 "fast_io_fail_timeout_sec": 0, 00:23:09.668 "disable_auto_failback": false, 00:23:09.668 "generate_uuids": false, 00:23:09.668 "transport_tos": 0, 00:23:09.668 "nvme_error_stat": false, 00:23:09.668 "rdma_srq_size": 0, 00:23:09.668 "io_path_stat": false, 00:23:09.668 "allow_accel_sequence": false, 00:23:09.668 "rdma_max_cq_size": 0, 00:23:09.668 "rdma_cm_event_timeout_ms": 0, 00:23:09.668 "dhchap_digests": [ 00:23:09.668 "sha256", 00:23:09.668 "sha384", 00:23:09.668 "sha512" 00:23:09.668 ], 00:23:09.668 "dhchap_dhgroups": [ 00:23:09.668 "null", 00:23:09.668 "ffdhe2048", 00:23:09.668 "ffdhe3072", 00:23:09.668 "ffdhe4096", 00:23:09.668 "ffdWaiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.668 he6144", 00:23:09.668 "ffdhe8192" 00:23:09.668 ] 00:23:09.668 } 00:23:09.668 }, 00:23:09.668 { 00:23:09.668 "method": "bdev_nvme_attach_controller", 00:23:09.668 "params": { 00:23:09.668 "name": "TLSTEST", 00:23:09.668 "trtype": "TCP", 00:23:09.668 "adrfam": "IPv4", 00:23:09.668 "traddr": "10.0.0.2", 00:23:09.668 "trsvcid": "4420", 00:23:09.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.668 "prchk_reftag": false, 00:23:09.668 "prchk_guard": false, 00:23:09.668 "ctrlr_loss_timeout_sec": 0, 00:23:09.668 "reconnect_delay_sec": 0, 00:23:09.668 "fast_io_fail_timeout_sec": 0, 00:23:09.668 "psk": "/tmp/tmp.RLP3HbYrEz", 00:23:09.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.668 "hdgst": false, 00:23:09.668 "ddgst": false 00:23:09.668 } 00:23:09.668 }, 00:23:09.668 { 00:23:09.668 "method": "bdev_nvme_set_hotplug", 00:23:09.668 "params": { 00:23:09.668 "period_us": 100000, 00:23:09.668 "enable": false 00:23:09.668 } 00:23:09.668 }, 00:23:09.668 { 00:23:09.668 "method": "bdev_wait_for_examine" 00:23:09.668 } 00:23:09.668 ] 00:23:09.668 }, 00:23:09.668 { 00:23:09.668 "subsystem": "nbd", 00:23:09.668 "config": [] 00:23:09.668 } 00:23:09.668 ] 00:23:09.668 }' 00:23:09.668 00:03:15 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:09.668 00:03:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.668 [2024-07-25 00:03:15.303581] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:09.668 [2024-07-25 00:03:15.303660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828765 ] 00:23:09.668 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.668 [2024-07-25 00:03:15.360545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.926 [2024-07-25 00:03:15.447063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.926 [2024-07-25 00:03:15.616635] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:09.926 [2024-07-25 00:03:15.616794] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:10.860 00:03:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:10.860 00:03:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:10.860 00:03:16 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:10.860 Running I/O for 10 seconds... 
00:23:20.830 00:23:20.830 Latency(us) 00:23:20.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.830 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:20.830 Verification LBA range: start 0x0 length 0x2000 00:23:20.830 TLSTESTn1 : 10.05 2600.71 10.16 0.00 0.00 49075.70 6456.51 75342.13 00:23:20.830 =================================================================================================================== 00:23:20.830 Total : 2600.71 10.16 0.00 0.00 49075.70 6456.51 75342.13 00:23:20.830 0 00:23:20.830 00:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:20.830 00:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 828765 00:23:20.830 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 828765 ']' 00:23:20.830 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 828765 00:23:20.830 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:20.830 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:20.830 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 828765 00:23:20.830 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:20.830 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:20.830 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 828765' 00:23:20.830 killing process with pid 828765 00:23:20.830 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 828765 00:23:20.830 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.830 00:23:20.830 Latency(us) 00:23:20.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.830 
=================================================================================================================== 00:23:20.830 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:20.830 [2024-07-25 00:03:26.475949] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:20.830 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 828765 00:23:21.088 00:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 828642 00:23:21.088 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 828642 ']' 00:23:21.088 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 828642 00:23:21.088 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:21.088 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:21.088 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 828642 00:23:21.088 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:21.088 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:21.088 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 828642' 00:23:21.088 killing process with pid 828642 00:23:21.088 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 828642 00:23:21.088 [2024-07-25 00:03:26.727936] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:21.088 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 828642 00:23:21.346 00:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:21.346 00:03:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:21.346 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 
00:23:21.346 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.346 00:03:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=830127 00:23:21.346 00:03:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:21.346 00:03:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 830127 00:23:21.346 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 830127 ']' 00:23:21.346 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.346 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:21.346 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.346 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:21.346 00:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.346 [2024-07-25 00:03:27.019171] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:21.346 [2024-07-25 00:03:27.019250] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.346 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.604 [2024-07-25 00:03:27.097364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.604 [2024-07-25 00:03:27.191077] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:21.604 [2024-07-25 00:03:27.191160] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.604 [2024-07-25 00:03:27.191201] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.604 [2024-07-25 00:03:27.191224] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.604 [2024-07-25 00:03:27.191245] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.604 [2024-07-25 00:03:27.191284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.862 00:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:21.862 00:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:21.862 00:03:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:21.862 00:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.862 00:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.862 00:03:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.862 00:03:27 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.RLP3HbYrEz 00:23:21.862 00:03:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.RLP3HbYrEz 00:23:21.862 00:03:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:22.119 [2024-07-25 00:03:27.596798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.119 00:03:27 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:22.377 00:03:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:22.377 [2024-07-25 00:03:28.082171] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:22.377 [2024-07-25 00:03:28.082461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.634 00:03:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:22.634 malloc0 00:23:22.634 00:03:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:22.892 00:03:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RLP3HbYrEz 00:23:23.150 [2024-07-25 00:03:28.790237] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:23.150 00:03:28 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=830296 00:23:23.150 00:03:28 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:23.150 00:03:28 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:23.150 00:03:28 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 830296 /var/tmp/bdevperf.sock 00:23:23.150 00:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 830296 ']' 00:23:23.150 00:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.150 00:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:23:23.150 00:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:23.150 00:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:23.150 00:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.150 [2024-07-25 00:03:28.853281] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:23.150 [2024-07-25 00:03:28.853368] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid830296 ] 00:23:23.408 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.409 [2024-07-25 00:03:28.916616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.409 [2024-07-25 00:03:29.004168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.409 00:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:23.409 00:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:23.409 00:03:29 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RLP3HbYrEz 00:23:23.667 00:03:29 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:23.925 [2024-07-25 00:03:29.579304] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.183 nvme0n1 00:23:24.183 
00:03:29 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:24.183 Running I/O for 1 seconds... 00:23:25.143 00:23:25.143 Latency(us) 00:23:25.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.143 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:25.143 Verification LBA range: start 0x0 length 0x2000 00:23:25.143 nvme0n1 : 1.04 2355.47 9.20 0.00 0.00 53284.53 6213.78 96313.65 00:23:25.143 =================================================================================================================== 00:23:25.143 Total : 2355.47 9.20 0.00 0.00 53284.53 6213.78 96313.65 00:23:25.143 0 00:23:25.143 00:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 830296 00:23:25.143 00:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 830296 ']' 00:23:25.143 00:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 830296 00:23:25.143 00:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:25.143 00:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:25.143 00:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 830296 00:23:25.400 00:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:25.400 00:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:25.400 00:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 830296' 00:23:25.400 killing process with pid 830296 00:23:25.400 00:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 830296 00:23:25.400 Received shutdown signal, test time was about 1.000000 seconds 00:23:25.400 00:23:25.400 Latency(us) 00:23:25.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:25.400 =================================================================================================================== 00:23:25.400 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:25.400 00:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 830296 00:23:25.400 00:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 830127 00:23:25.400 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 830127 ']' 00:23:25.400 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 830127 00:23:25.400 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:25.400 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:25.400 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 830127 00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 830127' 00:23:25.658 killing process with pid 830127 00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 830127 00:23:25.658 [2024-07-25 00:03:31.121919] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 830127 00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=830683 
00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 830683 00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 830683 ']' 00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:25.658 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.916 [2024-07-25 00:03:31.414845] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:25.916 [2024-07-25 00:03:31.414925] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.916 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.916 [2024-07-25 00:03:31.483980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.916 [2024-07-25 00:03:31.574807] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.916 [2024-07-25 00:03:31.574869] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:25.916 [2024-07-25 00:03:31.574886] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.916 [2024-07-25 00:03:31.574900] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.916 [2024-07-25 00:03:31.574912] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.916 [2024-07-25 00:03:31.574942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.175 [2024-07-25 00:03:31.729982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.175 malloc0 00:23:26.175 [2024-07-25 00:03:31.763212] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:26.175 [2024-07-25 00:03:31.763525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=830710 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 830710 /var/tmp/bdevperf.sock 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 830710 ']' 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:26.175 00:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.175 [2024-07-25 00:03:31.837014] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:23:26.175 [2024-07-25 00:03:31.837093] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid830710 ] 00:23:26.175 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.433 [2024-07-25 00:03:31.903585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.433 [2024-07-25 00:03:31.994675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.433 00:03:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:26.433 00:03:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:26.433 00:03:32 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RLP3HbYrEz 00:23:26.691 00:03:32 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:26.949 [2024-07-25 00:03:32.628312] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.206 nvme0n1 00:23:27.206 00:03:32 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:27.206 Running I/O for 1 seconds... 
00:23:28.578 00:23:28.578 Latency(us) 00:23:28.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.578 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:28.578 Verification LBA range: start 0x0 length 0x2000 00:23:28.578 nvme0n1 : 1.04 1565.78 6.12 0.00 0.00 80663.91 7427.41 86992.97 00:23:28.578 =================================================================================================================== 00:23:28.578 Total : 1565.78 6.12 0.00 0.00 80663.91 7427.41 86992.97 00:23:28.578 0 00:23:28.578 00:03:33 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:28.578 00:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.578 00:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.578 00:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.578 00:03:33 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:28.578 "subsystems": [ 00:23:28.578 { 00:23:28.578 "subsystem": "keyring", 00:23:28.578 "config": [ 00:23:28.578 { 00:23:28.578 "method": "keyring_file_add_key", 00:23:28.578 "params": { 00:23:28.578 "name": "key0", 00:23:28.578 "path": "/tmp/tmp.RLP3HbYrEz" 00:23:28.578 } 00:23:28.578 } 00:23:28.578 ] 00:23:28.578 }, 00:23:28.578 { 00:23:28.578 "subsystem": "iobuf", 00:23:28.578 "config": [ 00:23:28.578 { 00:23:28.578 "method": "iobuf_set_options", 00:23:28.578 "params": { 00:23:28.578 "small_pool_count": 8192, 00:23:28.578 "large_pool_count": 1024, 00:23:28.578 "small_bufsize": 8192, 00:23:28.578 "large_bufsize": 135168 00:23:28.578 } 00:23:28.578 } 00:23:28.578 ] 00:23:28.578 }, 00:23:28.578 { 00:23:28.578 "subsystem": "sock", 00:23:28.578 "config": [ 00:23:28.578 { 00:23:28.578 "method": "sock_set_default_impl", 00:23:28.578 "params": { 00:23:28.578 "impl_name": "posix" 00:23:28.578 } 00:23:28.578 }, 00:23:28.578 { 00:23:28.578 "method": "sock_impl_set_options", 00:23:28.578 
"params": { 00:23:28.578 "impl_name": "ssl", 00:23:28.578 "recv_buf_size": 4096, 00:23:28.578 "send_buf_size": 4096, 00:23:28.578 "enable_recv_pipe": true, 00:23:28.578 "enable_quickack": false, 00:23:28.578 "enable_placement_id": 0, 00:23:28.578 "enable_zerocopy_send_server": true, 00:23:28.578 "enable_zerocopy_send_client": false, 00:23:28.578 "zerocopy_threshold": 0, 00:23:28.578 "tls_version": 0, 00:23:28.578 "enable_ktls": false 00:23:28.578 } 00:23:28.578 }, 00:23:28.578 { 00:23:28.578 "method": "sock_impl_set_options", 00:23:28.578 "params": { 00:23:28.578 "impl_name": "posix", 00:23:28.578 "recv_buf_size": 2097152, 00:23:28.578 "send_buf_size": 2097152, 00:23:28.578 "enable_recv_pipe": true, 00:23:28.578 "enable_quickack": false, 00:23:28.578 "enable_placement_id": 0, 00:23:28.578 "enable_zerocopy_send_server": true, 00:23:28.578 "enable_zerocopy_send_client": false, 00:23:28.578 "zerocopy_threshold": 0, 00:23:28.578 "tls_version": 0, 00:23:28.578 "enable_ktls": false 00:23:28.578 } 00:23:28.578 } 00:23:28.578 ] 00:23:28.578 }, 00:23:28.578 { 00:23:28.578 "subsystem": "vmd", 00:23:28.578 "config": [] 00:23:28.578 }, 00:23:28.578 { 00:23:28.578 "subsystem": "accel", 00:23:28.578 "config": [ 00:23:28.579 { 00:23:28.579 "method": "accel_set_options", 00:23:28.579 "params": { 00:23:28.579 "small_cache_size": 128, 00:23:28.579 "large_cache_size": 16, 00:23:28.579 "task_count": 2048, 00:23:28.579 "sequence_count": 2048, 00:23:28.579 "buf_count": 2048 00:23:28.579 } 00:23:28.579 } 00:23:28.579 ] 00:23:28.579 }, 00:23:28.579 { 00:23:28.579 "subsystem": "bdev", 00:23:28.579 "config": [ 00:23:28.579 { 00:23:28.579 "method": "bdev_set_options", 00:23:28.579 "params": { 00:23:28.579 "bdev_io_pool_size": 65535, 00:23:28.579 "bdev_io_cache_size": 256, 00:23:28.579 "bdev_auto_examine": true, 00:23:28.579 "iobuf_small_cache_size": 128, 00:23:28.579 "iobuf_large_cache_size": 16 00:23:28.579 } 00:23:28.579 }, 00:23:28.579 { 00:23:28.579 "method": "bdev_raid_set_options", 
00:23:28.579 "params": { 00:23:28.579 "process_window_size_kb": 1024 00:23:28.579 } 00:23:28.579 }, 00:23:28.579 { 00:23:28.579 "method": "bdev_iscsi_set_options", 00:23:28.579 "params": { 00:23:28.579 "timeout_sec": 30 00:23:28.579 } 00:23:28.579 }, 00:23:28.579 { 00:23:28.579 "method": "bdev_nvme_set_options", 00:23:28.579 "params": { 00:23:28.579 "action_on_timeout": "none", 00:23:28.579 "timeout_us": 0, 00:23:28.579 "timeout_admin_us": 0, 00:23:28.579 "keep_alive_timeout_ms": 10000, 00:23:28.579 "arbitration_burst": 0, 00:23:28.579 "low_priority_weight": 0, 00:23:28.579 "medium_priority_weight": 0, 00:23:28.579 "high_priority_weight": 0, 00:23:28.579 "nvme_adminq_poll_period_us": 10000, 00:23:28.579 "nvme_ioq_poll_period_us": 0, 00:23:28.579 "io_queue_requests": 0, 00:23:28.579 "delay_cmd_submit": true, 00:23:28.579 "transport_retry_count": 4, 00:23:28.579 "bdev_retry_count": 3, 00:23:28.579 "transport_ack_timeout": 0, 00:23:28.579 "ctrlr_loss_timeout_sec": 0, 00:23:28.579 "reconnect_delay_sec": 0, 00:23:28.579 "fast_io_fail_timeout_sec": 0, 00:23:28.579 "disable_auto_failback": false, 00:23:28.579 "generate_uuids": false, 00:23:28.579 "transport_tos": 0, 00:23:28.579 "nvme_error_stat": false, 00:23:28.579 "rdma_srq_size": 0, 00:23:28.579 "io_path_stat": false, 00:23:28.579 "allow_accel_sequence": false, 00:23:28.579 "rdma_max_cq_size": 0, 00:23:28.579 "rdma_cm_event_timeout_ms": 0, 00:23:28.579 "dhchap_digests": [ 00:23:28.579 "sha256", 00:23:28.579 "sha384", 00:23:28.579 "sha512" 00:23:28.579 ], 00:23:28.579 "dhchap_dhgroups": [ 00:23:28.579 "null", 00:23:28.579 "ffdhe2048", 00:23:28.579 "ffdhe3072", 00:23:28.579 "ffdhe4096", 00:23:28.579 "ffdhe6144", 00:23:28.579 "ffdhe8192" 00:23:28.579 ] 00:23:28.579 } 00:23:28.579 }, 00:23:28.579 { 00:23:28.579 "method": "bdev_nvme_set_hotplug", 00:23:28.579 "params": { 00:23:28.579 "period_us": 100000, 00:23:28.579 "enable": false 00:23:28.579 } 00:23:28.579 }, 00:23:28.579 { 00:23:28.579 "method": "bdev_malloc_create", 
00:23:28.579 "params": { 00:23:28.579 "name": "malloc0", 00:23:28.579 "num_blocks": 8192, 00:23:28.579 "block_size": 4096, 00:23:28.579 "physical_block_size": 4096, 00:23:28.579 "uuid": "c8b4b3cb-29e1-4123-b1b2-4f87922ef8c9", 00:23:28.579 "optimal_io_boundary": 0 00:23:28.579 } 00:23:28.579 }, 00:23:28.579 { 00:23:28.579 "method": "bdev_wait_for_examine" 00:23:28.579 } 00:23:28.579 ] 00:23:28.579 }, 00:23:28.579 { 00:23:28.579 "subsystem": "nbd", 00:23:28.579 "config": [] 00:23:28.579 }, 00:23:28.579 { 00:23:28.579 "subsystem": "scheduler", 00:23:28.579 "config": [ 00:23:28.579 { 00:23:28.579 "method": "framework_set_scheduler", 00:23:28.579 "params": { 00:23:28.579 "name": "static" 00:23:28.579 } 00:23:28.579 } 00:23:28.579 ] 00:23:28.579 }, 00:23:28.579 { 00:23:28.579 "subsystem": "nvmf", 00:23:28.579 "config": [ 00:23:28.579 { 00:23:28.579 "method": "nvmf_set_config", 00:23:28.579 "params": { 00:23:28.579 "discovery_filter": "match_any", 00:23:28.579 "admin_cmd_passthru": { 00:23:28.579 "identify_ctrlr": false 00:23:28.579 } 00:23:28.579 } 00:23:28.579 }, 00:23:28.579 { 00:23:28.579 "method": "nvmf_set_max_subsystems", 00:23:28.579 "params": { 00:23:28.579 "max_subsystems": 1024 00:23:28.579 } 00:23:28.579 }, 00:23:28.579 { 00:23:28.579 "method": "nvmf_set_crdt", 00:23:28.579 "params": { 00:23:28.579 "crdt1": 0, 00:23:28.579 "crdt2": 0, 00:23:28.579 "crdt3": 0 00:23:28.579 } 00:23:28.579 }, 00:23:28.579 { 00:23:28.579 "method": "nvmf_create_transport", 00:23:28.579 "params": { 00:23:28.579 "trtype": "TCP", 00:23:28.579 "max_queue_depth": 128, 00:23:28.579 "max_io_qpairs_per_ctrlr": 127, 00:23:28.579 "in_capsule_data_size": 4096, 00:23:28.579 "max_io_size": 131072, 00:23:28.579 "io_unit_size": 131072, 00:23:28.579 "max_aq_depth": 128, 00:23:28.579 "num_shared_buffers": 511, 00:23:28.579 "buf_cache_size": 4294967295, 00:23:28.579 "dif_insert_or_strip": false, 00:23:28.579 "zcopy": false, 00:23:28.579 "c2h_success": false, 00:23:28.579 "sock_priority": 0, 
00:23:28.579 "abort_timeout_sec": 1, 00:23:28.579 "ack_timeout": 0, 00:23:28.579 "data_wr_pool_size": 0 00:23:28.579 } 00:23:28.579 }, 00:23:28.579 { 00:23:28.579 "method": "nvmf_create_subsystem", 00:23:28.579 "params": { 00:23:28.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.579 "allow_any_host": false, 00:23:28.579 "serial_number": "00000000000000000000", 00:23:28.579 "model_number": "SPDK bdev Controller", 00:23:28.579 "max_namespaces": 32, 00:23:28.579 "min_cntlid": 1, 00:23:28.579 "max_cntlid": 65519, 00:23:28.579 "ana_reporting": false 00:23:28.579 } 00:23:28.579 }, 00:23:28.579 { 00:23:28.579 "method": "nvmf_subsystem_add_host", 00:23:28.579 "params": { 00:23:28.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.579 "host": "nqn.2016-06.io.spdk:host1", 00:23:28.579 "psk": "key0" 00:23:28.579 } 00:23:28.579 }, 00:23:28.579 { 00:23:28.579 "method": "nvmf_subsystem_add_ns", 00:23:28.579 "params": { 00:23:28.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.579 "namespace": { 00:23:28.579 "nsid": 1, 00:23:28.579 "bdev_name": "malloc0", 00:23:28.579 "nguid": "C8B4B3CB29E14123B1B24F87922EF8C9", 00:23:28.579 "uuid": "c8b4b3cb-29e1-4123-b1b2-4f87922ef8c9", 00:23:28.579 "no_auto_visible": false 00:23:28.579 } 00:23:28.579 } 00:23:28.579 }, 00:23:28.579 { 00:23:28.579 "method": "nvmf_subsystem_add_listener", 00:23:28.579 "params": { 00:23:28.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.579 "listen_address": { 00:23:28.579 "trtype": "TCP", 00:23:28.579 "adrfam": "IPv4", 00:23:28.579 "traddr": "10.0.0.2", 00:23:28.579 "trsvcid": "4420" 00:23:28.579 }, 00:23:28.579 "secure_channel": true 00:23:28.579 } 00:23:28.579 } 00:23:28.579 ] 00:23:28.579 } 00:23:28.579 ] 00:23:28.579 }' 00:23:28.579 00:03:33 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:28.838 00:03:34 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:28.838 "subsystems": [ 00:23:28.838 { 
00:23:28.838 "subsystem": "keyring", 00:23:28.838 "config": [ 00:23:28.838 { 00:23:28.838 "method": "keyring_file_add_key", 00:23:28.838 "params": { 00:23:28.838 "name": "key0", 00:23:28.838 "path": "/tmp/tmp.RLP3HbYrEz" 00:23:28.838 } 00:23:28.838 } 00:23:28.838 ] 00:23:28.838 }, 00:23:28.838 { 00:23:28.838 "subsystem": "iobuf", 00:23:28.838 "config": [ 00:23:28.838 { 00:23:28.838 "method": "iobuf_set_options", 00:23:28.838 "params": { 00:23:28.838 "small_pool_count": 8192, 00:23:28.838 "large_pool_count": 1024, 00:23:28.838 "small_bufsize": 8192, 00:23:28.838 "large_bufsize": 135168 00:23:28.838 } 00:23:28.838 } 00:23:28.838 ] 00:23:28.838 }, 00:23:28.838 { 00:23:28.838 "subsystem": "sock", 00:23:28.838 "config": [ 00:23:28.838 { 00:23:28.838 "method": "sock_set_default_impl", 00:23:28.838 "params": { 00:23:28.838 "impl_name": "posix" 00:23:28.838 } 00:23:28.838 }, 00:23:28.838 { 00:23:28.838 "method": "sock_impl_set_options", 00:23:28.838 "params": { 00:23:28.838 "impl_name": "ssl", 00:23:28.838 "recv_buf_size": 4096, 00:23:28.838 "send_buf_size": 4096, 00:23:28.838 "enable_recv_pipe": true, 00:23:28.838 "enable_quickack": false, 00:23:28.838 "enable_placement_id": 0, 00:23:28.838 "enable_zerocopy_send_server": true, 00:23:28.838 "enable_zerocopy_send_client": false, 00:23:28.838 "zerocopy_threshold": 0, 00:23:28.838 "tls_version": 0, 00:23:28.838 "enable_ktls": false 00:23:28.838 } 00:23:28.838 }, 00:23:28.838 { 00:23:28.838 "method": "sock_impl_set_options", 00:23:28.838 "params": { 00:23:28.838 "impl_name": "posix", 00:23:28.838 "recv_buf_size": 2097152, 00:23:28.838 "send_buf_size": 2097152, 00:23:28.838 "enable_recv_pipe": true, 00:23:28.838 "enable_quickack": false, 00:23:28.838 "enable_placement_id": 0, 00:23:28.838 "enable_zerocopy_send_server": true, 00:23:28.838 "enable_zerocopy_send_client": false, 00:23:28.838 "zerocopy_threshold": 0, 00:23:28.838 "tls_version": 0, 00:23:28.838 "enable_ktls": false 00:23:28.838 } 00:23:28.838 } 00:23:28.838 ] 
00:23:28.838 }, 00:23:28.838 { 00:23:28.838 "subsystem": "vmd", 00:23:28.838 "config": [] 00:23:28.838 }, 00:23:28.838 { 00:23:28.838 "subsystem": "accel", 00:23:28.838 "config": [ 00:23:28.838 { 00:23:28.838 "method": "accel_set_options", 00:23:28.838 "params": { 00:23:28.838 "small_cache_size": 128, 00:23:28.838 "large_cache_size": 16, 00:23:28.838 "task_count": 2048, 00:23:28.838 "sequence_count": 2048, 00:23:28.838 "buf_count": 2048 00:23:28.838 } 00:23:28.838 } 00:23:28.838 ] 00:23:28.838 }, 00:23:28.838 { 00:23:28.838 "subsystem": "bdev", 00:23:28.838 "config": [ 00:23:28.838 { 00:23:28.838 "method": "bdev_set_options", 00:23:28.838 "params": { 00:23:28.838 "bdev_io_pool_size": 65535, 00:23:28.838 "bdev_io_cache_size": 256, 00:23:28.838 "bdev_auto_examine": true, 00:23:28.838 "iobuf_small_cache_size": 128, 00:23:28.838 "iobuf_large_cache_size": 16 00:23:28.838 } 00:23:28.838 }, 00:23:28.838 { 00:23:28.838 "method": "bdev_raid_set_options", 00:23:28.838 "params": { 00:23:28.838 "process_window_size_kb": 1024 00:23:28.838 } 00:23:28.838 }, 00:23:28.838 { 00:23:28.838 "method": "bdev_iscsi_set_options", 00:23:28.838 "params": { 00:23:28.838 "timeout_sec": 30 00:23:28.838 } 00:23:28.838 }, 00:23:28.838 { 00:23:28.838 "method": "bdev_nvme_set_options", 00:23:28.838 "params": { 00:23:28.838 "action_on_timeout": "none", 00:23:28.838 "timeout_us": 0, 00:23:28.838 "timeout_admin_us": 0, 00:23:28.838 "keep_alive_timeout_ms": 10000, 00:23:28.838 "arbitration_burst": 0, 00:23:28.838 "low_priority_weight": 0, 00:23:28.838 "medium_priority_weight": 0, 00:23:28.838 "high_priority_weight": 0, 00:23:28.838 "nvme_adminq_poll_period_us": 10000, 00:23:28.838 "nvme_ioq_poll_period_us": 0, 00:23:28.838 "io_queue_requests": 512, 00:23:28.838 "delay_cmd_submit": true, 00:23:28.838 "transport_retry_count": 4, 00:23:28.838 "bdev_retry_count": 3, 00:23:28.838 "transport_ack_timeout": 0, 00:23:28.838 "ctrlr_loss_timeout_sec": 0, 00:23:28.838 "reconnect_delay_sec": 0, 00:23:28.838 
"fast_io_fail_timeout_sec": 0, 00:23:28.838 "disable_auto_failback": false, 00:23:28.838 "generate_uuids": false, 00:23:28.838 "transport_tos": 0, 00:23:28.838 "nvme_error_stat": false, 00:23:28.838 "rdma_srq_size": 0, 00:23:28.838 "io_path_stat": false, 00:23:28.838 "allow_accel_sequence": false, 00:23:28.838 "rdma_max_cq_size": 0, 00:23:28.838 "rdma_cm_event_timeout_ms": 0, 00:23:28.838 "dhchap_digests": [ 00:23:28.838 "sha256", 00:23:28.838 "sha384", 00:23:28.838 "sha512" 00:23:28.838 ], 00:23:28.838 "dhchap_dhgroups": [ 00:23:28.838 "null", 00:23:28.838 "ffdhe2048", 00:23:28.838 "ffdhe3072", 00:23:28.838 "ffdhe4096", 00:23:28.838 "ffdhe6144", 00:23:28.838 "ffdhe8192" 00:23:28.838 ] 00:23:28.838 } 00:23:28.838 }, 00:23:28.838 { 00:23:28.838 "method": "bdev_nvme_attach_controller", 00:23:28.838 "params": { 00:23:28.838 "name": "nvme0", 00:23:28.839 "trtype": "TCP", 00:23:28.839 "adrfam": "IPv4", 00:23:28.839 "traddr": "10.0.0.2", 00:23:28.839 "trsvcid": "4420", 00:23:28.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.839 "prchk_reftag": false, 00:23:28.839 "prchk_guard": false, 00:23:28.839 "ctrlr_loss_timeout_sec": 0, 00:23:28.839 "reconnect_delay_sec": 0, 00:23:28.839 "fast_io_fail_timeout_sec": 0, 00:23:28.839 "psk": "key0", 00:23:28.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:28.839 "hdgst": false, 00:23:28.839 "ddgst": false 00:23:28.839 } 00:23:28.839 }, 00:23:28.839 { 00:23:28.839 "method": "bdev_nvme_set_hotplug", 00:23:28.839 "params": { 00:23:28.839 "period_us": 100000, 00:23:28.839 "enable": false 00:23:28.839 } 00:23:28.839 }, 00:23:28.839 { 00:23:28.839 "method": "bdev_enable_histogram", 00:23:28.839 "params": { 00:23:28.839 "name": "nvme0n1", 00:23:28.839 "enable": true 00:23:28.839 } 00:23:28.839 }, 00:23:28.839 { 00:23:28.839 "method": "bdev_wait_for_examine" 00:23:28.839 } 00:23:28.839 ] 00:23:28.839 }, 00:23:28.839 { 00:23:28.839 "subsystem": "nbd", 00:23:28.839 "config": [] 00:23:28.839 } 00:23:28.839 ] 00:23:28.839 }' 00:23:28.839 
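Both `save_config` dumps above (the target's `tgtcfg` and bdevperf's `bperfcfg`) share one invariant the rest of the test depends on: every `"psk": "key0"` reference — in `nvmf_subsystem_add_host` on the target side and `bdev_nvme_attach_controller` on the initiator side — must resolve to a name registered by `keyring_file_add_key` in the keyring subsystem. A minimal sketch that checks this invariant on a trimmed-down copy of the saved config shape (only the fields inspected here are reproduced; the helper names are my own, not part of SPDK):

```python
import json

# Trimmed-down copy of the saved-config shape from the log dumps above.
cfg = json.loads("""
{
  "subsystems": [
    {"subsystem": "keyring",
     "config": [{"method": "keyring_file_add_key",
                 "params": {"name": "key0", "path": "/tmp/tmp.RLP3HbYrEz"}}]},
    {"subsystem": "bdev",
     "config": [{"method": "bdev_nvme_attach_controller",
                 "params": {"name": "nvme0", "psk": "key0"}}]}
  ]
}
""")

def registered_keys(cfg):
    # Key names added via keyring_file_add_key in the keyring subsystem.
    return {e["params"]["name"]
            for s in cfg["subsystems"] if s["subsystem"] == "keyring"
            for e in s["config"] if e["method"] == "keyring_file_add_key"}

def referenced_keys(cfg):
    # Every "psk" parameter anywhere in the config, regardless of subsystem.
    return {e["params"]["psk"]
            for s in cfg["subsystems"]
            for e in s.get("config", [])
            if "psk" in e.get("params", {})}

# A dangling psk reference would make the replayed config fail at startup.
assert referenced_keys(cfg) <= registered_keys(cfg)
print("all psk references resolve:", sorted(referenced_keys(cfg)))
```

The same check applies unchanged to the full dumps, since `save_config` always emits the keyring subsystem before the subsystems that consume its keys.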
00:03:34 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 830710 00:23:28.839 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 830710 ']' 00:23:28.839 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 830710 00:23:28.839 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:28.839 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:28.839 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 830710 00:23:28.839 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:28.839 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:28.839 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 830710' 00:23:28.839 killing process with pid 830710 00:23:28.839 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 830710 00:23:28.839 Received shutdown signal, test time was about 1.000000 seconds 00:23:28.839 00:23:28.839 Latency(us) 00:23:28.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.839 =================================================================================================================== 00:23:28.839 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.839 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 830710 00:23:28.839 00:03:34 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 830683 00:23:28.839 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 830683 ']' 00:23:28.839 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 830683 00:23:28.839 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:28.839 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:28.839 00:03:34 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 830683 00:23:29.098 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:29.098 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:29.098 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 830683' 00:23:29.098 killing process with pid 830683 00:23:29.098 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 830683 00:23:29.098 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 830683 00:23:29.098 00:03:34 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:29.098 00:03:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:29.098 00:03:34 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:29.098 "subsystems": [ 00:23:29.098 { 00:23:29.098 "subsystem": "keyring", 00:23:29.098 "config": [ 00:23:29.098 { 00:23:29.098 "method": "keyring_file_add_key", 00:23:29.098 "params": { 00:23:29.098 "name": "key0", 00:23:29.098 "path": "/tmp/tmp.RLP3HbYrEz" 00:23:29.098 } 00:23:29.098 } 00:23:29.098 ] 00:23:29.098 }, 00:23:29.098 { 00:23:29.098 "subsystem": "iobuf", 00:23:29.098 "config": [ 00:23:29.098 { 00:23:29.098 "method": "iobuf_set_options", 00:23:29.098 "params": { 00:23:29.098 "small_pool_count": 8192, 00:23:29.098 "large_pool_count": 1024, 00:23:29.098 "small_bufsize": 8192, 00:23:29.098 "large_bufsize": 135168 00:23:29.098 } 00:23:29.098 } 00:23:29.098 ] 00:23:29.098 }, 00:23:29.098 { 00:23:29.098 "subsystem": "sock", 00:23:29.098 "config": [ 00:23:29.098 { 00:23:29.098 "method": "sock_set_default_impl", 00:23:29.098 "params": { 00:23:29.098 "impl_name": "posix" 00:23:29.098 } 00:23:29.098 }, 00:23:29.098 { 00:23:29.098 "method": "sock_impl_set_options", 00:23:29.098 "params": { 00:23:29.098 "impl_name": "ssl", 00:23:29.098 "recv_buf_size": 4096, 00:23:29.098 "send_buf_size": 4096, 
00:23:29.098 "enable_recv_pipe": true, 00:23:29.098 "enable_quickack": false, 00:23:29.098 "enable_placement_id": 0, 00:23:29.098 "enable_zerocopy_send_server": true, 00:23:29.098 "enable_zerocopy_send_client": false, 00:23:29.098 "zerocopy_threshold": 0, 00:23:29.098 "tls_version": 0, 00:23:29.098 "enable_ktls": false 00:23:29.098 } 00:23:29.098 }, 00:23:29.098 { 00:23:29.098 "method": "sock_impl_set_options", 00:23:29.098 "params": { 00:23:29.098 "impl_name": "posix", 00:23:29.098 "recv_buf_size": 2097152, 00:23:29.098 "send_buf_size": 2097152, 00:23:29.098 "enable_recv_pipe": true, 00:23:29.098 "enable_quickack": false, 00:23:29.098 "enable_placement_id": 0, 00:23:29.098 "enable_zerocopy_send_server": true, 00:23:29.098 "enable_zerocopy_send_client": false, 00:23:29.098 "zerocopy_threshold": 0, 00:23:29.098 "tls_version": 0, 00:23:29.098 "enable_ktls": false 00:23:29.098 } 00:23:29.098 } 00:23:29.098 ] 00:23:29.098 }, 00:23:29.098 { 00:23:29.098 "subsystem": "vmd", 00:23:29.098 "config": [] 00:23:29.098 }, 00:23:29.098 { 00:23:29.098 "subsystem": "accel", 00:23:29.098 "config": [ 00:23:29.098 { 00:23:29.098 "method": "accel_set_options", 00:23:29.098 "params": { 00:23:29.098 "small_cache_size": 128, 00:23:29.098 "large_cache_size": 16, 00:23:29.098 "task_count": 2048, 00:23:29.098 "sequence_count": 2048, 00:23:29.098 "buf_count": 2048 00:23:29.098 } 00:23:29.098 } 00:23:29.098 ] 00:23:29.099 }, 00:23:29.099 { 00:23:29.099 "subsystem": "bdev", 00:23:29.099 "config": [ 00:23:29.099 { 00:23:29.099 "method": "bdev_set_options", 00:23:29.099 "params": { 00:23:29.099 "bdev_io_pool_size": 65535, 00:23:29.099 "bdev_io_cache_size": 256, 00:23:29.099 "bdev_auto_examine": true, 00:23:29.099 "iobuf_small_cache_size": 128, 00:23:29.099 "iobuf_large_cache_size": 16 00:23:29.099 } 00:23:29.099 }, 00:23:29.099 { 00:23:29.099 "method": "bdev_raid_set_options", 00:23:29.099 "params": { 00:23:29.099 "process_window_size_kb": 1024 00:23:29.099 } 00:23:29.099 }, 00:23:29.099 { 
00:23:29.099 "method": "bdev_iscsi_set_options", 00:23:29.099 "params": { 00:23:29.099 "timeout_sec": 30 00:23:29.099 } 00:23:29.099 }, 00:23:29.099 { 00:23:29.099 "method": "bdev_nvme_set_options", 00:23:29.099 "params": { 00:23:29.099 "action_on_timeout": "none", 00:23:29.099 "timeout_us": 0, 00:23:29.099 "timeout_admin_us": 0, 00:23:29.099 "keep_alive_timeout_ms": 10000, 00:23:29.099 "arbitration_burst": 0, 00:23:29.099 "low_priority_weight": 0, 00:23:29.099 "medium_priority_weight": 0, 00:23:29.099 "high_priority_weight": 0, 00:23:29.099 "nvme_adminq_poll_period_us": 10000, 00:23:29.099 "nvme_ioq_poll_period_us": 0, 00:23:29.099 "io_queue_requests": 0, 00:23:29.099 "delay_cmd_submit": true, 00:23:29.099 "transport_retry_count": 4, 00:23:29.099 "bdev_retry_count": 3, 00:23:29.099 "transport_ack_timeout": 0, 00:23:29.099 "ctrlr_loss_timeout_sec": 0, 00:23:29.099 "reconnect_delay_sec": 0, 00:23:29.099 "fast_io_fail_timeout_sec": 0, 00:23:29.099 "disable_auto_failback": false, 00:23:29.099 "generate_uuids": false, 00:23:29.099 "transport_tos": 0, 00:23:29.099 "nvme_error_stat": false, 00:23:29.099 "rdma_srq_size": 0, 00:23:29.099 "io_path_stat": false, 00:23:29.099 "allow_accel_sequence": false, 00:23:29.099 "rdma_max_cq_size": 0, 00:23:29.099 "rdma_cm_event_timeout_ms": 0, 00:23:29.099 "dhchap_digests": [ 00:23:29.099 "sha256", 00:23:29.099 "sha384", 00:23:29.099 "sha512" 00:23:29.099 ], 00:23:29.099 "dhchap_dhgroups": [ 00:23:29.099 "null", 00:23:29.099 "ffdhe2048", 00:23:29.099 "ffdhe3072", 00:23:29.099 "ffdhe4096", 00:23:29.099 "ffdhe6144", 00:23:29.099 "ffdhe8192" 00:23:29.099 ] 00:23:29.099 } 00:23:29.099 }, 00:23:29.099 { 00:23:29.099 "method": "bdev_nvme_set_hotplug", 00:23:29.099 "params": { 00:23:29.099 "period_us": 100000, 00:23:29.099 "enable": false 00:23:29.099 } 00:23:29.099 }, 00:23:29.099 { 00:23:29.099 "method": "bdev_malloc_create", 00:23:29.099 "params": { 00:23:29.099 "name": "malloc0", 00:23:29.099 "num_blocks": 8192, 00:23:29.099 
"block_size": 4096, 00:23:29.099 "physical_block_size": 4096, 00:23:29.099 "uuid": "c8b4b3cb-29e1-4123-b1b2-4f87922ef8c9", 00:23:29.099 "optimal_io_boundary": 0 00:23:29.099 } 00:23:29.099 }, 00:23:29.099 { 00:23:29.099 "method": "bdev_wait_for_examine" 00:23:29.099 } 00:23:29.099 ] 00:23:29.099 }, 00:23:29.099 { 00:23:29.099 "subsystem": "nbd", 00:23:29.099 "config": [] 00:23:29.099 }, 00:23:29.099 { 00:23:29.099 "subsystem": "scheduler", 00:23:29.099 "config": [ 00:23:29.099 { 00:23:29.099 "method": "framework_set_scheduler", 00:23:29.099 "params": { 00:23:29.099 "name": "static" 00:23:29.099 } 00:23:29.099 } 00:23:29.099 ] 00:23:29.099 }, 00:23:29.099 { 00:23:29.099 "subsystem": "nvmf", 00:23:29.099 "config": [ 00:23:29.099 { 00:23:29.099 "method": "nvmf_set_config", 00:23:29.099 "params": { 00:23:29.099 "discovery_filter": "match_any", 00:23:29.099 "admin_cmd_passthru": { 00:23:29.099 "identify_ctrlr": false 00:23:29.099 } 00:23:29.099 } 00:23:29.099 }, 00:23:29.099 { 00:23:29.099 "method": "nvmf_set_max_subsystems", 00:23:29.099 "params": { 00:23:29.099 "max_subsystems": 1024 00:23:29.099 } 00:23:29.099 }, 00:23:29.099 { 00:23:29.099 "method": "nvmf_set_crdt", 00:23:29.099 "params": { 00:23:29.099 "crdt1": 0, 00:23:29.099 "crdt2": 0, 00:23:29.099 "crdt3": 0 00:23:29.099 } 00:23:29.099 }, 00:23:29.099 { 00:23:29.099 "method": "nvmf_create_transport", 00:23:29.099 "params": { 00:23:29.099 "trtype": "TCP", 00:23:29.099 "max_queue_depth": 128, 00:23:29.099 "max_io_qpairs_per_ctrlr": 127, 00:23:29.099 "in_capsule_data_size": 4096, 00:23:29.099 "max_io_size": 131072, 00:23:29.099 "io_unit_size": 131072, 00:23:29.099 "max_aq_depth": 128, 00:23:29.099 "num_shared_buffers": 511, 00:23:29.099 "buf_cache_size": 4294967295, 00:23:29.099 "dif_insert_or_strip": false, 00:23:29.099 "zcopy": false, 00:23:29.099 "c2h_success": false, 00:23:29.099 "sock_priority": 0, 00:23:29.099 "abort_timeout_sec": 1, 00:23:29.099 "ack_timeout": 0, 00:23:29.099 "data_wr_pool_size": 0 
00:23:29.099 } 00:23:29.099 }, 00:23:29.099 { 00:23:29.099 "method": "nvmf_create_subsystem", 00:23:29.099 "params": { 00:23:29.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.099 "allow_any_host": false, 00:23:29.099 "serial_number": "00000000000000000000", 00:23:29.099 "model_number": "SPDK bdev Controller", 00:23:29.099 "max_namespaces": 32, 00:23:29.099 "min_cntlid": 1, 00:23:29.099 "max_cntlid": 65519, 00:23:29.099 "ana_reporting": false 00:23:29.099 } 00:23:29.099 }, 00:23:29.099 { 00:23:29.099 "method": "nvmf_subsystem_add_host", 00:23:29.099 "params": { 00:23:29.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.099 "host": "nqn.2016-06.io.spdk:host1", 00:23:29.099 "psk": "key0" 00:23:29.099 } 00:23:29.099 }, 00:23:29.099 { 00:23:29.099 "method": "nvmf_subsystem_add_ns", 00:23:29.099 "params": { 00:23:29.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.099 "namespace": { 00:23:29.099 "nsid": 1, 00:23:29.099 "bdev_name": "malloc0", 00:23:29.099 "nguid": "C8B4B3CB29E14123B1B24F87922EF8C9", 00:23:29.099 "uuid": "c8b4b3cb-29e1-4123-b1b2-4f87922ef8c9", 00:23:29.099 "no_auto_visible": false 00:23:29.099 } 00:23:29.099 } 00:23:29.099 }, 00:23:29.099 { 00:23:29.099 "method": "nvmf_subsystem_add_listener", 00:23:29.099 "params": { 00:23:29.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.099 "listen_address": { 00:23:29.099 "trtype": "TCP", 00:23:29.099 "adrfam": "IPv4", 00:23:29.099 "traddr": "10.0.0.2", 00:23:29.099 "trsvcid": "4420" 00:23:29.099 }, 00:23:29.099 "secure_channel": true 00:23:29.099 } 00:23:29.099 } 00:23:29.099 ] 00:23:29.099 } 00:23:29.099 ] 00:23:29.099 }' 00:23:29.099 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:29.099 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.099 00:03:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=831115 00:23:29.099 00:03:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:29.099 00:03:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 831115 00:23:29.099 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 831115 ']' 00:23:29.099 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.099 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:29.099 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.099 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:29.099 00:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.357 [2024-07-25 00:03:34.842717] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:29.357 [2024-07-25 00:03:34.842807] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.357 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.357 [2024-07-25 00:03:34.908941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.357 [2024-07-25 00:03:35.002036] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.357 [2024-07-25 00:03:35.002108] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:29.357 [2024-07-25 00:03:35.002127] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.357 [2024-07-25 00:03:35.002140] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.357 [2024-07-25 00:03:35.002151] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.357 [2024-07-25 00:03:35.002240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.615 [2024-07-25 00:03:35.251303] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.615 [2024-07-25 00:03:35.283318] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:29.615 [2024-07-25 00:03:35.291329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.181 00:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:30.181 00:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:30.181 00:03:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.181 00:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:30.181 00:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.181 00:03:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.181 00:03:35 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=831268 00:23:30.181 00:03:35 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 831268 /var/tmp/bdevperf.sock 00:23:30.181 00:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 831268 ']' 00:23:30.181 00:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.181 00:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:30.181 00:03:35 
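The restarted target is launched with `-c /dev/fd/62`: the saved `tgtcfg` JSON is echoed into a file descriptor via shell process substitution rather than written to a temporary file, and `nvmf_tgt` reads it back through the `/dev/fd` alias for `/proc/self/fd`. A small Linux-only sketch of the same trick, with Python standing in for `nvmf_tgt` (the config literal is a placeholder, not the real dump):

```python
import json
import os

cfg = {"subsystems": [{"subsystem": "keyring", "config": []}]}

# Emulate `nvmf_tgt -c /dev/fd/N`: write the config into a pipe, then hand
# the consumer the read end's /dev/fd path as if it were a config file.
r, w = os.pipe()
os.write(w, json.dumps(cfg).encode())
os.close(w)                       # close the write end so the reader sees EOF

with open(f"/dev/fd/{r}") as f:   # Linux: /dev/fd is an alias for /proc/self/fd
    loaded = json.load(f)
os.close(r)

assert loaded == cfg
print(loaded["subsystems"][0]["subsystem"])  # keyring
```

This keeps the PSK-bearing config off the filesystem entirely, which is presumably why the test feeds both `nvmf_tgt` and `bdevperf` their configs through `/dev/fd` descriptors. Note the pipe buffer bounds how much can be written before the reader opens the descriptor; the configs in this log are far below that limit.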
nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:30.181 00:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.181 00:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:30.181 00:03:35 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:30.181 "subsystems": [ 00:23:30.181 { 00:23:30.181 "subsystem": "keyring", 00:23:30.181 "config": [ 00:23:30.181 { 00:23:30.181 "method": "keyring_file_add_key", 00:23:30.181 "params": { 00:23:30.181 "name": "key0", 00:23:30.181 "path": "/tmp/tmp.RLP3HbYrEz" 00:23:30.181 } 00:23:30.181 } 00:23:30.181 ] 00:23:30.181 }, 00:23:30.181 { 00:23:30.181 "subsystem": "iobuf", 00:23:30.181 "config": [ 00:23:30.181 { 00:23:30.181 "method": "iobuf_set_options", 00:23:30.181 "params": { 00:23:30.181 "small_pool_count": 8192, 00:23:30.181 "large_pool_count": 1024, 00:23:30.181 "small_bufsize": 8192, 00:23:30.181 "large_bufsize": 135168 00:23:30.181 } 00:23:30.181 } 00:23:30.181 ] 00:23:30.181 }, 00:23:30.181 { 00:23:30.181 "subsystem": "sock", 00:23:30.181 "config": [ 00:23:30.181 { 00:23:30.181 "method": "sock_set_default_impl", 00:23:30.181 "params": { 00:23:30.181 "impl_name": "posix" 00:23:30.181 } 00:23:30.181 }, 00:23:30.181 { 00:23:30.181 "method": "sock_impl_set_options", 00:23:30.181 "params": { 00:23:30.181 "impl_name": "ssl", 00:23:30.181 "recv_buf_size": 4096, 00:23:30.181 "send_buf_size": 4096, 00:23:30.181 "enable_recv_pipe": true, 00:23:30.181 "enable_quickack": false, 00:23:30.181 "enable_placement_id": 0, 00:23:30.181 "enable_zerocopy_send_server": true, 00:23:30.181 "enable_zerocopy_send_client": false, 00:23:30.181 
"zerocopy_threshold": 0, 00:23:30.181 "tls_version": 0, 00:23:30.181 "enable_ktls": false 00:23:30.181 } 00:23:30.181 }, 00:23:30.181 { 00:23:30.181 "method": "sock_impl_set_options", 00:23:30.181 "params": { 00:23:30.181 "impl_name": "posix", 00:23:30.181 "recv_buf_size": 2097152, 00:23:30.181 "send_buf_size": 2097152, 00:23:30.181 "enable_recv_pipe": true, 00:23:30.181 "enable_quickack": false, 00:23:30.181 "enable_placement_id": 0, 00:23:30.181 "enable_zerocopy_send_server": true, 00:23:30.181 "enable_zerocopy_send_client": false, 00:23:30.181 "zerocopy_threshold": 0, 00:23:30.181 "tls_version": 0, 00:23:30.181 "enable_ktls": false 00:23:30.181 } 00:23:30.181 } 00:23:30.181 ] 00:23:30.181 }, 00:23:30.181 { 00:23:30.181 "subsystem": "vmd", 00:23:30.181 "config": [] 00:23:30.181 }, 00:23:30.181 { 00:23:30.181 "subsystem": "accel", 00:23:30.181 "config": [ 00:23:30.181 { 00:23:30.181 "method": "accel_set_options", 00:23:30.181 "params": { 00:23:30.181 "small_cache_size": 128, 00:23:30.181 "large_cache_size": 16, 00:23:30.181 "task_count": 2048, 00:23:30.181 "sequence_count": 2048, 00:23:30.181 "buf_count": 2048 00:23:30.181 } 00:23:30.181 } 00:23:30.181 ] 00:23:30.181 }, 00:23:30.181 { 00:23:30.181 "subsystem": "bdev", 00:23:30.181 "config": [ 00:23:30.181 { 00:23:30.181 "method": "bdev_set_options", 00:23:30.181 "params": { 00:23:30.181 "bdev_io_pool_size": 65535, 00:23:30.181 "bdev_io_cache_size": 256, 00:23:30.181 "bdev_auto_examine": true, 00:23:30.181 "iobuf_small_cache_size": 128, 00:23:30.181 "iobuf_large_cache_size": 16 00:23:30.181 } 00:23:30.181 }, 00:23:30.181 { 00:23:30.181 "method": "bdev_raid_set_options", 00:23:30.181 "params": { 00:23:30.181 "process_window_size_kb": 1024 00:23:30.181 } 00:23:30.181 }, 00:23:30.181 { 00:23:30.181 "method": "bdev_iscsi_set_options", 00:23:30.181 "params": { 00:23:30.181 "timeout_sec": 30 00:23:30.181 } 00:23:30.181 }, 00:23:30.181 { 00:23:30.181 "method": "bdev_nvme_set_options", 00:23:30.181 "params": { 00:23:30.181 
"action_on_timeout": "none", 00:23:30.181 "timeout_us": 0, 00:23:30.181 "timeout_admin_us": 0, 00:23:30.181 "keep_alive_timeout_ms": 10000, 00:23:30.181 "arbitration_burst": 0, 00:23:30.181 "low_priority_weight": 0, 00:23:30.181 "medium_priority_weight": 0, 00:23:30.181 "high_priority_weight": 0, 00:23:30.181 "nvme_adminq_poll_period_us": 10000, 00:23:30.181 "nvme_ioq_poll_period_us": 0, 00:23:30.181 "io_queue_requests": 512, 00:23:30.181 "delay_cmd_submit": true, 00:23:30.181 "transport_retry_count": 4, 00:23:30.181 "bdev_retry_count": 3, 00:23:30.181 "transport_ack_timeout": 0, 00:23:30.181 "ctrlr_loss_timeout_sec": 0, 00:23:30.181 "reconnect_delay_sec": 0, 00:23:30.181 "fast_io_fail_timeout_sec": 0, 00:23:30.181 "disable_auto_failback": false, 00:23:30.181 "generate_uuids": false, 00:23:30.181 "transport_tos": 0, 00:23:30.181 "nvme_error_stat": false, 00:23:30.181 "rdma_srq_size": 0, 00:23:30.181 "io_path_stat": false, 00:23:30.181 "allow_accel_sequence": false, 00:23:30.181 "rdma_max_cq_size": 0, 00:23:30.182 "rdma_cm_event_timeout_ms": 0, 00:23:30.182 "dhchap_digests": [ 00:23:30.182 "sha256", 00:23:30.182 "sha384", 00:23:30.182 "sha512" 00:23:30.182 ], 00:23:30.182 "dhchap_dhgroups": [ 00:23:30.182 "null", 00:23:30.182 "ffdhe2048", 00:23:30.182 "ffdhe3072", 00:23:30.182 "ffdhe4096", 00:23:30.182 "ffdhe6144", 00:23:30.182 "ffdhe8192" 00:23:30.182 ] 00:23:30.182 } 00:23:30.182 }, 00:23:30.182 { 00:23:30.182 "method": "bdev_nvme_attach_controller", 00:23:30.182 "params": { 00:23:30.182 "name": "nvme0", 00:23:30.182 "trtype": "TCP", 00:23:30.182 "adrfam": "IPv4", 00:23:30.182 "traddr": "10.0.0.2", 00:23:30.182 "trsvcid": "4420", 00:23:30.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.182 "prchk_reftag": false, 00:23:30.182 "prchk_guard": false, 00:23:30.182 "ctrlr_loss_timeout_sec": 0, 00:23:30.182 "reconnect_delay_sec": 0, 00:23:30.182 "fast_io_fail_timeout_sec": 0, 00:23:30.182 "psk": "key0", 00:23:30.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 
00:23:30.182 "hdgst": false, 00:23:30.182 "ddgst": false 00:23:30.182 } 00:23:30.182 }, 00:23:30.182 { 00:23:30.182 "method": "bdev_nvme_set_hotplug", 00:23:30.182 "params": { 00:23:30.182 "period_us": 100000, 00:23:30.182 "enable": false 00:23:30.182 } 00:23:30.182 }, 00:23:30.182 { 00:23:30.182 "method": "bdev_enable_histogram", 00:23:30.182 "params": { 00:23:30.182 "name": "nvme0n1", 00:23:30.182 "enable": true 00:23:30.182 } 00:23:30.182 }, 00:23:30.182 { 00:23:30.182 "method": "bdev_wait_for_examine" 00:23:30.182 } 00:23:30.182 ] 00:23:30.182 }, 00:23:30.182 { 00:23:30.182 "subsystem": "nbd", 00:23:30.182 "config": [] 00:23:30.182 } 00:23:30.182 ] 00:23:30.182 }' 00:23:30.182 00:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.182 [2024-07-25 00:03:35.881440] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:30.182 [2024-07-25 00:03:35.881520] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid831268 ] 00:23:30.440 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.440 [2024-07-25 00:03:35.939129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.440 [2024-07-25 00:03:36.026930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.697 [2024-07-25 00:03:36.202284] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.262 00:03:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:31.262 00:03:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:31.262 00:03:36 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:31.262 00:03:36 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # 
jq -r '.[].name' 00:23:31.519 00:03:37 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.519 00:03:37 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:31.519 Running I/O for 1 seconds... 00:23:32.891 00:23:32.891 Latency(us) 00:23:32.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.891 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:32.891 Verification LBA range: start 0x0 length 0x2000 00:23:32.891 nvme0n1 : 1.04 2450.39 9.57 0.00 0.00 51275.36 6262.33 83109.36 00:23:32.891 =================================================================================================================== 00:23:32.891 Total : 2450.39 9.57 0.00 0.00 51275.36 6262.33 83109.36 00:23:32.891 0 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:32.891 nvmf_trace.0 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 831268 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 831268 ']' 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 831268 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 831268 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 831268' 00:23:32.891 killing process with pid 831268 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 831268 00:23:32.891 Received shutdown signal, test time was about 1.000000 seconds 00:23:32.891 00:23:32.891 Latency(us) 00:23:32.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.891 =================================================================================================================== 00:23:32.891 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 831268 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- 
# '[' tcp == tcp ']' 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:32.891 00:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:32.891 rmmod nvme_tcp 00:23:33.149 rmmod nvme_fabrics 00:23:33.149 rmmod nvme_keyring 00:23:33.149 00:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.149 00:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:33.149 00:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:33.149 00:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 831115 ']' 00:23:33.149 00:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 831115 00:23:33.149 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 831115 ']' 00:23:33.149 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 831115 00:23:33.149 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:33.149 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:33.149 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 831115 00:23:33.149 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:33.149 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:33.149 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 831115' 00:23:33.149 killing process with pid 831115 00:23:33.149 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 831115 00:23:33.149 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 831115 00:23:33.406 00:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:33.406 00:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:33.406 
00:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:33.406 00:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.406 00:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:33.406 00:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.406 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.406 00:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.304 00:03:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:35.304 00:03:40 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.9uqP2rmPRL /tmp/tmp.j5DOv4pMqD /tmp/tmp.RLP3HbYrEz 00:23:35.304 00:23:35.304 real 1m18.386s 00:23:35.304 user 2m1.253s 00:23:35.304 sys 0m29.745s 00:23:35.304 00:03:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:35.304 00:03:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.304 ************************************ 00:23:35.304 END TEST nvmf_tls 00:23:35.304 ************************************ 00:23:35.562 00:03:41 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:35.562 00:03:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:35.562 00:03:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:35.562 00:03:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:35.562 ************************************ 00:23:35.562 START TEST nvmf_fips 00:23:35.562 ************************************ 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:35.562 * Looking for test storage... 
00:23:35.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.562 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:35.563 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:35.563 Error setting digest 00:23:35.563 00A255764A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:35.564 00A255764A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:35.564 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:35.564 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:35.564 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:35.564 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:35.564 00:03:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:35.564 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:35.564 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.564 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:35.564 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:35.564 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:35.564 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.564 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.564 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.564 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:35.564 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:35.564 00:03:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:23:35.564 00:03:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:37.463 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:37.463 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:37.464 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:37.464 Found net devices under 0000:09:00.0: cvl_0_0 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:37.464 Found net devices under 0000:09:00.1: cvl_0_1 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.464 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:37.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:23:37.722 00:23:37.722 --- 10.0.0.2 ping statistics --- 00:23:37.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.722 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:37.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:37.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:23:37.722 00:23:37.722 --- 10.0.0.1 ping statistics --- 00:23:37.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.722 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=833513 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 833513 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 833513 ']' 00:23:37.722 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:23:37.723 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:37.723 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.723 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:37.723 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:37.723 [2024-07-25 00:03:43.366681] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:37.723 [2024-07-25 00:03:43.366760] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.723 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.723 [2024-07-25 00:03:43.433115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.980 [2024-07-25 00:03:43.525490] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.980 [2024-07-25 00:03:43.525551] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.980 [2024-07-25 00:03:43.525567] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.980 [2024-07-25 00:03:43.525580] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.980 [2024-07-25 00:03:43.525591] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:37.980 [2024-07-25 00:03:43.525628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.980 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:37.980 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:37.980 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:37.980 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.980 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:37.980 00:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.980 00:03:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:37.980 00:03:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:37.980 00:03:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:37.980 00:03:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:37.980 00:03:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:37.980 00:03:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:37.980 00:03:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:37.980 00:03:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:38.238 [2024-07-25 00:03:43.888511] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.238 [2024-07-25 00:03:43.904504] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:23:38.238 [2024-07-25 00:03:43.904724] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.238 [2024-07-25 00:03:43.937266] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:38.238 malloc0 00:23:38.495 00:03:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:38.495 00:03:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=833653 00:23:38.495 00:03:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:38.495 00:03:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 833653 /var/tmp/bdevperf.sock 00:23:38.495 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 833653 ']' 00:23:38.495 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.495 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:38.495 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.495 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:38.495 00:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:38.495 [2024-07-25 00:03:44.023788] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:23:38.495 [2024-07-25 00:03:44.023869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid833653 ] 00:23:38.495 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.495 [2024-07-25 00:03:44.082273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.495 [2024-07-25 00:03:44.167178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.753 00:03:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:38.753 00:03:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:38.753 00:03:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:39.010 [2024-07-25 00:03:44.499548] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.010 [2024-07-25 00:03:44.499674] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:39.010 TLSTESTn1 00:23:39.010 00:03:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:39.010 Running I/O for 10 seconds... 
00:23:51.202 00:23:51.202 Latency(us) 00:23:51.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.202 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:51.202 Verification LBA range: start 0x0 length 0x2000 00:23:51.202 TLSTESTn1 : 10.05 2632.17 10.28 0.00 0.00 48494.81 6262.33 69905.07 00:23:51.202 =================================================================================================================== 00:23:51.202 Total : 2632.17 10.28 0.00 0.00 48494.81 6262.33 69905.07 00:23:51.202 0 00:23:51.202 00:03:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:51.202 00:03:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:51.202 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:23:51.202 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:51.203 nvmf_trace.0 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 833653 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 833653 ']' 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 
833653 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 833653 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 833653' 00:23:51.203 killing process with pid 833653 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 833653 00:23:51.203 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.203 00:23:51.203 Latency(us) 00:23:51.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.203 =================================================================================================================== 00:23:51.203 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.203 [2024-07-25 00:03:54.900200] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:51.203 00:03:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 833653 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:23:51.203 rmmod nvme_tcp 00:23:51.203 rmmod nvme_fabrics 00:23:51.203 rmmod nvme_keyring 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 833513 ']' 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 833513 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 833513 ']' 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 833513 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 833513 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 833513' 00:23:51.203 killing process with pid 833513 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 833513 00:23:51.203 [2024-07-25 00:03:55.213672] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 833513 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:51.203 
00:03:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.203 00:03:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:52.136 00:23:52.136 real 0m16.467s 00:23:52.136 user 0m17.192s 00:23:52.136 sys 0m7.810s 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:52.136 ************************************ 00:23:52.136 END TEST nvmf_fips 00:23:52.136 ************************************ 00:23:52.136 00:03:57 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:52.136 00:03:57 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:52.136 00:03:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:52.136 00:03:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:52.136 00:03:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:52.136 ************************************ 00:23:52.136 START TEST nvmf_fuzz 00:23:52.136 ************************************ 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:52.136 * Looking for test storage... 
00:23:52.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.136 00:03:57 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:52.137 00:03:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:54.051 
00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:54.051 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:54.052 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:54.052 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # 
for pci in "${pci_devs[@]}" 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:54.052 Found net devices under 0000:09:00.0: cvl_0_0 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:54.052 Found net devices under 0000:09:00.1: cvl_0_1 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # 
is_hw=yes 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.052 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:54.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:23:54.310 00:23:54.310 --- 10.0.0.2 ping statistics --- 00:23:54.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.310 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:54.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:23:54.310 00:23:54.310 --- 10.0.0.1 ping statistics --- 00:23:54.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.310 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- 
target/fabrics_fuzz.sh@14 -- # nvmfpid=836903 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 836903 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 836903 ']' 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
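The entry above launches `nvmf_tgt` inside the `cvl_0_0_ns_spdk` namespace and then blocks in `waitforlisten` until the RPC socket `/var/tmp/spdk.sock` appears (note `max_retries=100` in the xtrace). A minimal standalone sketch of that wait loop, with a hypothetical `wait_for_path` helper and path (not the actual `waitforlisten` implementation):

```shell
#!/usr/bin/env bash
# Poll until a path (e.g. an RPC UNIX socket) exists, like waitforlisten does.
# Returns 0 once the path appears, 1 after max_retries polls.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}
```

Polling for the socket (rather than sleeping a fixed interval) is what lets the harness proceed the moment the target is ready while still bounding the wait.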
00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:54.310 00:03:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.568 Malloc0 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:54.568 00:04:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:26.627 Fuzzing completed. Shutting down the fuzz application 00:24:26.627 00:24:26.627 Dumping successful admin opcodes: 00:24:26.627 8, 9, 10, 24, 00:24:26.627 Dumping successful io opcodes: 00:24:26.627 0, 9, 00:24:26.627 NS: 0x200003aeff00 I/O qp, Total commands completed: 461018, total successful commands: 2667, random_seed: 1395717312 00:24:26.627 NS: 0x200003aeff00 admin qp, Total commands completed: 57136, total successful commands: 455, random_seed: 1925382144 00:24:26.627 00:04:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:26.627 Fuzzing completed. 
Shutting down the fuzz application 00:24:26.627 00:24:26.627 Dumping successful admin opcodes: 00:24:26.627 24, 00:24:26.627 Dumping successful io opcodes: 00:24:26.627 00:24:26.627 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2533512997 00:24:26.627 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2533706720 00:24:26.627 00:04:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:26.627 00:04:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.627 00:04:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.627 00:04:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.627 00:04:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:26.627 00:04:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:26.627 00:04:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:26.627 00:04:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:26.627 00:04:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:26.627 00:04:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:26.627 00:04:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:26.627 00:04:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:26.627 rmmod nvme_tcp 00:24:26.627 rmmod nvme_fabrics 00:24:26.627 rmmod nvme_keyring 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 836903 ']' 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # 
killprocess 836903 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 836903 ']' 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 836903 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 836903 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 836903' 00:24:26.627 killing process with pid 836903 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 836903 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 836903 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:26.627 00:04:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.158 00:04:34 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:29.158 00:04:34 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:29.158 00:24:29.158 real 0m36.832s 00:24:29.158 user 0m50.785s 00:24:29.158 sys 0m15.434s 00:24:29.158 00:04:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:29.158 00:04:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:29.158 ************************************ 00:24:29.158 END TEST nvmf_fuzz 00:24:29.158 ************************************ 00:24:29.158 00:04:34 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:29.158 00:04:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:29.158 00:04:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:29.158 00:04:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:29.158 ************************************ 00:24:29.158 START TEST nvmf_multiconnection 00:24:29.158 ************************************ 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:29.158 * Looking for test storage... 
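As `multiconnection.sh` sources `nvmf/common.sh` below, `nvme gen-hostnqn` produces the host NQN seen in the trace (`nqn.2014-08.org.nvmexpress:uuid:29f67375-…`). A sketch of generating an NQN in that same UUID-based format, assuming a Linux host where `/proc/sys/kernel/random/uuid` is available (this stands in for `nvme gen-hostnqn`, it is not its implementation):

```shell
#!/usr/bin/env bash
# Build a host NQN in the uuid form used by nvmf/common.sh:
#   nqn.2014-08.org.nvmexpress:uuid:<random UUID>
gen_hostnqn() {
    local uuid
    uuid=$(cat /proc/sys/kernel/random/uuid)   # kernel-provided random UUID
    echo "nqn.2014-08.org.nvmexpress:uuid:$uuid"
}
```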
00:24:29.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:29.158 
00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:29.158 00:04:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@291 -- # pci_devs=() 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.058 
00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:31.058 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.058 
00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:31.058 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:31.058 Found net devices under 
0000:09:00.0: cvl_0_0 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:31.058 Found net devices under 0000:09:00.1: cvl_0_1 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:31.058 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.059 
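The discovery loop above resolves each detected PCI function (0000:09:00.0 and 0000:09:00.1) to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/`* and then stripping the directory prefix (`${pci_net_devs[@]##*/}`), yielding `cvl_0_0` and `cvl_0_1`. A minimal Python sketch of the same sysfs lookup, exercised against a fake sysfs tree (the function name and `sysfs_root` parameter are illustrative, not part of SPDK):

```python
import os
from glob import glob

def net_devs_for_pci(pci_addr: str, sysfs_root: str = "/sys") -> list:
    """Return kernel net device names bound to a PCI function.

    Mirrors the shell pattern:
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")
    """
    pattern = os.path.join(sysfs_root, "bus/pci/devices", pci_addr, "net", "*")
    # basename() strips the sysfs path, leaving only the interface name.
    return sorted(os.path.basename(p) for p in glob(pattern))
```

On a real host, `net_devs_for_pci("0000:09:00.0")` would return `["cvl_0_0"]` for the configuration in this log; a PCI function with no bound netdev yields an empty list, which is what the `(( 1 == 0 ))` guard in the script checks for.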
00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:31.059 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:24:31.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:24:31.059 00:24:31.059 --- 10.0.0.2 ping statistics --- 00:24:31.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.059 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:31.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:24:31.059 00:24:31.059 --- 10.0.0.1 ping statistics --- 00:24:31.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.059 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:31.059 
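The `nvmf_tcp_init` sequence above splits the NIC's two port functions into target and initiator roles: `cvl_0_0` is moved into a dedicated network namespace for the target so that target and initiator traffic actually traverses the link rather than short-circuiting through loopback. The commands below are a condensed sketch of that sequence as it appears in the log (interface names and addresses are taken from this run; requires root and the same two-port hardware):

```shell
# Isolate the target-side port in its own namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator keeps 10.0.0.1 in the root namespace; target gets 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring up both ends plus loopback inside the namespace.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow inbound NVMe/TCP (port 4420) on the initiator interface,
# then verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Because the target runs under `ip netns exec cvl_0_0_ns_spdk`, every subsequent target-side command in the log (including launching `nvmf_tgt` itself) is prefixed with that wrapper via `NVMF_TARGET_NS_CMD`.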
00:04:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=842506 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 842506 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 842506 ']' 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:31.059 00:04:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.317 [2024-07-25 00:04:36.774588] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:24:31.317 [2024-07-25 00:04:36.774655] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.317 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.317 [2024-07-25 00:04:36.845540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:31.317 [2024-07-25 00:04:36.940021] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:31.317 [2024-07-25 00:04:36.940073] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.317 [2024-07-25 00:04:36.940112] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.317 [2024-07-25 00:04:36.940145] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.317 [2024-07-25 00:04:36.940159] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.317 [2024-07-25 00:04:36.940220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.317 [2024-07-25 00:04:36.940285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.317 [2024-07-25 00:04:36.940353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.317 [2024-07-25 00:04:36.940359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.575 
[2024-07-25 00:04:37.108205] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.575 Malloc1 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:31.575 00:04:37 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.575 [2024-07-25 00:04:37.167846] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.575 Malloc2 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.575 Malloc3 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.575 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.833 Malloc4 00:24:31.833 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.833 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:31.833 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.833 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.833 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.833 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:31.833 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.833 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.833 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.833 00:04:37 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:31.833 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.833 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 Malloc5 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 Malloc6 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 Malloc7 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 Malloc8 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.834 Malloc9 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.834 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.092 Malloc10 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.092 Malloc11 00:24:32.092 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.093 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:32.093 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.093 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.093 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.093 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:32.093 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.093 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.093 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.093 00:04:37 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420
00:24:32.093 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:32.093 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:32.093 00:04:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:32.093 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11
00:24:32.093 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:32.093 00:04:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:24:32.657 00:04:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1
00:24:32.657 00:04:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:24:32.657 00:04:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:24:32.657 00:04:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:24:32.657 00:04:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:24:35.181 00:04:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:24:35.181 00:04:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:24:35.181 00:04:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1
00:24:35.181 00:04:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:24:35.181 00:04:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:24:35.181 00:04:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:24:35.181 00:04:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:35.181 00:04:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420
00:24:35.439 00:04:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2
00:24:35.439 00:04:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:24:35.439 00:04:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:24:35.439 00:04:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:24:35.439 00:04:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:24:37.337 00:04:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:24:37.337 00:04:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:24:37.337 00:04:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2
00:24:37.337 00:04:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:24:37.337 00:04:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:24:37.337 00:04:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:24:37.337 00:04:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:37.337 00:04:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420
00:24:37.902 00:04:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3
00:24:37.902 00:04:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:24:37.902 00:04:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:24:37.902 00:04:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:24:37.902 00:04:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:24:40.428 00:04:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:24:40.428 00:04:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:24:40.428 00:04:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3
00:24:40.428 00:04:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:24:40.428 00:04:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:24:40.428 00:04:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:24:40.428 00:04:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:40.428 00:04:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420
00:24:40.685 00:04:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4
00:24:40.685 00:04:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:24:40.685 00:04:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:24:40.685 00:04:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:24:40.685 00:04:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:24:43.210 00:04:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:24:43.210 00:04:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:24:43.210 00:04:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4
00:24:43.210 00:04:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:24:43.210 00:04:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:24:43.210 00:04:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:24:43.210 00:04:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:43.210 00:04:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420
00:24:43.467 00:04:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5
00:24:43.467 00:04:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:24:43.467 00:04:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:24:43.467 00:04:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:24:43.467 00:04:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:24:45.393 00:04:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:24:45.393 00:04:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:24:45.393 00:04:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5
00:24:45.393 00:04:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:24:45.393 00:04:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:24:45.393 00:04:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:24:45.393 00:04:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:45.393 00:04:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420
00:24:46.327 00:04:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6
00:24:46.327 00:04:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:24:46.327 00:04:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:24:46.327 00:04:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:24:46.327 00:04:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:24:48.226 00:04:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:24:48.226 00:04:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:24:48.226 00:04:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6
00:24:48.226 00:04:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:24:48.226 00:04:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:24:48.226 00:04:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:24:48.226 00:04:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:48.226 00:04:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420
00:24:49.157 00:04:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7
00:24:49.157 00:04:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:24:49.157 00:04:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:24:49.157 00:04:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:24:49.157 00:04:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:24:51.054 00:04:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:24:51.054 00:04:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:24:51.054 00:04:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7
00:24:51.054 00:04:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:24:51.054 00:04:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:24:51.054 00:04:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:24:51.054 00:04:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:51.054 00:04:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420
00:24:51.985 00:04:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8
00:24:51.985 00:04:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:24:51.985 00:04:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:24:51.985 00:04:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:24:51.985 00:04:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:24:53.881 00:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:24:53.881 00:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:24:53.881 00:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8
00:24:53.881 00:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:24:53.881 00:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:24:53.881 00:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:24:53.881 00:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:53.881 00:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420
00:24:54.814 00:05:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9
00:24:54.814 00:05:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:24:54.814 00:05:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:24:54.814 00:05:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:24:54.814 00:05:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:24:56.713 00:05:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:24:56.713 00:05:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:24:56.713 00:05:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9
00:24:56.713 00:05:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:24:56.713 00:05:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:24:56.713 00:05:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:24:56.713 00:05:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:56.713 00:05:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420
00:24:57.645 00:05:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10
00:24:57.645 00:05:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:24:57.645 00:05:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:24:57.645 00:05:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:24:57.645 00:05:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:24:59.542 00:05:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:24:59.542 00:05:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:24:59.542 00:05:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10
00:24:59.542 00:05:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:24:59.542 00:05:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:24:59.542 00:05:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:24:59.542 00:05:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:59.542 00:05:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420
00:25:00.475 00:05:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11
00:25:00.475 00:05:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:25:00.475 00:05:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:25:00.475 00:05:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:25:00.475 00:05:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:25:02.372 00:05:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:25:02.372 00:05:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:25:02.372 00:05:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11
00:25:02.372 00:05:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:25:02.372 00:05:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:25:02.373 00:05:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:25:02.373 00:05:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:25:02.373 [global]
00:25:02.373 thread=1
00:25:02.373 invalidate=1
00:25:02.373 rw=read
00:25:02.373 time_based=1
00:25:02.373 runtime=10
00:25:02.373 ioengine=libaio
00:25:02.373 direct=1
00:25:02.373 bs=262144
00:25:02.373 iodepth=64
00:25:02.373 norandommap=1
00:25:02.373 numjobs=1
00:25:02.373
00:25:02.373 [job0]
00:25:02.373 filename=/dev/nvme0n1
00:25:02.373 [job1]
00:25:02.373 filename=/dev/nvme10n1
00:25:02.373 [job2]
00:25:02.373 filename=/dev/nvme1n1
00:25:02.373 [job3]
00:25:02.373 filename=/dev/nvme2n1
00:25:02.373 [job4]
00:25:02.373 filename=/dev/nvme3n1
00:25:02.373 [job5]
00:25:02.373 filename=/dev/nvme4n1
00:25:02.373 [job6]
00:25:02.373 filename=/dev/nvme5n1
00:25:02.373 [job7]
00:25:02.373 filename=/dev/nvme6n1
00:25:02.373 [job8]
00:25:02.373 filename=/dev/nvme7n1
00:25:02.373 [job9]
00:25:02.373 filename=/dev/nvme8n1
00:25:02.373 [job10]
00:25:02.373 filename=/dev/nvme9n1
00:25:02.631 Could not set queue depth (nvme0n1)
00:25:02.631 Could not set queue depth (nvme10n1)
00:25:02.631 Could not set queue depth (nvme1n1)
00:25:02.631 Could not set queue depth (nvme2n1)
00:25:02.631 Could not set queue depth (nvme3n1)
00:25:02.631 Could not set queue depth (nvme4n1)
00:25:02.631 Could not set queue depth (nvme5n1)
00:25:02.631 Could not set queue depth (nvme6n1)
00:25:02.631 Could not set queue depth (nvme7n1)
00:25:02.631 Could not set queue depth (nvme8n1)
00:25:02.631 Could not set queue depth (nvme9n1)
00:25:02.631 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:02.631 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:02.631 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:02.631 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:02.631 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:02.631 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:02.631 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:02.631 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:02.631 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:02.631 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:02.631 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:02.631 fio-3.35
00:25:02.631 Starting 11 threads
00:25:14.839
00:25:14.839 job0: (groupid=0, jobs=1): err= 0: pid=846759: Thu Jul 25 00:05:18 2024
00:25:14.839 read: IOPS=624, BW=156MiB/s (164MB/s)(1569MiB/10045msec)
00:25:14.839 slat (usec): min=11, max=103856, avg=1426.02, stdev=4746.52
00:25:14.839 clat (msec): min=5, max=471, avg=100.91, stdev=55.70
00:25:14.839 lat (msec): min=5, max=471, avg=102.34, stdev=56.31
00:25:14.839 clat percentiles (msec):
00:25:14.839 | 1.00th=[ 24], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 59],
00:25:14.839 | 30.00th=[ 69], 40.00th=[ 80], 50.00th=[ 89], 60.00th=[ 105],
00:25:14.839 | 70.00th=[ 121], 80.00th=[ 134], 90.00th=[ 157], 95.00th=[ 197],
00:25:14.839 | 99.00th=[ 330], 99.50th=[ 388], 99.90th=[ 451], 99.95th=[ 456],
00:25:14.839 | 99.99th=[ 472]
00:25:14.839 bw ( KiB/s): min=50176, max=346624, per=9.56%, avg=159052.80, stdev=72819.91, samples=20
00:25:14.839 iops : min= 196, max= 1354, avg=621.30, stdev=284.45, samples=20
00:25:14.839 lat (msec) : 10=0.22%, 20=0.57%, 50=13.22%, 100=43.96%, 250=39.98%
00:25:14.839 lat (msec) : 500=2.04%
00:25:14.839 cpu : usr=0.36%, sys=2.16%, ctx=1244, majf=0, minf=4097
00:25:14.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:25:14.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.839 issued rwts: total=6276,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.839 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:14.839 job1: (groupid=0, jobs=1): err= 0: pid=846760: Thu Jul 25 00:05:18 2024
00:25:14.839 read: IOPS=658, BW=165MiB/s (173MB/s)(1655MiB/10046msec)
00:25:14.839 slat (usec): min=9, max=469876, avg=1183.89, stdev=7476.19
00:25:14.839 clat (usec): min=1190, max=683903, avg=95856.40, stdev=71886.82
00:25:14.839 lat (usec): min=1215, max=875435, avg=97040.29, stdev=72858.06
00:25:14.839 clat percentiles (msec):
00:25:14.839 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 41], 20.00th=[ 63],
00:25:14.839 | 30.00th=[ 69], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 89],
00:25:14.839 | 70.00th=[ 96], 80.00th=[ 109], 90.00th=[ 161], 95.00th=[ 207],
00:25:14.839 | 99.00th=[ 418], 99.50th=[ 531], 99.90th=[ 676], 99.95th=[ 676],
00:25:14.839 | 99.99th=[ 684]
00:25:14.839 bw ( KiB/s): min=39936, max=281088, per=10.08%, avg=167833.60, stdev=65622.81, samples=20
00:25:14.839 iops : min= 156, max= 1098, avg=655.60, stdev=256.34, samples=20
00:25:14.839 lat (msec) : 2=0.29%, 4=0.23%, 10=0.62%, 20=2.42%, 50=10.56%
00:25:14.839 lat (msec) : 100=59.22%, 250=23.19%, 500=2.87%, 750=0.60%
00:25:14.839 cpu : usr=0.38%, sys=2.08%, ctx=1395, majf=0, minf=4097
00:25:14.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:25:14.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.839 issued rwts: total=6619,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.839 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:14.839 job2: (groupid=0, jobs=1): err= 0: pid=846763: Thu Jul 25 00:05:18 2024
00:25:14.839 read: IOPS=845, BW=211MiB/s (222MB/s)(2128MiB/10069msec)
00:25:14.839 slat (usec): min=9, max=122942, avg=729.89, stdev=3396.41
00:25:14.839 clat (usec): min=1025, max=510605, avg=74916.39, stdev=59833.15
00:25:14.839 lat (usec): min=1048, max=510635, avg=75646.28, stdev=60367.65
00:25:14.839 clat percentiles (msec):
00:25:14.839 | 1.00th=[ 6], 5.00th=[ 10], 10.00th=[ 14], 20.00th=[ 20],
00:25:14.839 | 30.00th=[ 32], 40.00th=[ 42], 50.00th=[ 63], 60.00th=[ 82],
00:25:14.839 | 70.00th=[ 112], 80.00th=[ 126], 90.00th=[ 148], 95.00th=[ 167],
00:25:14.839 | 99.00th=[ 224], 99.50th=[ 305], 99.90th=[ 506], 99.95th=[ 506],
00:25:14.839 | 99.99th=[ 510]
00:25:14.839 bw ( KiB/s): min=102912, max=500224, per=12.99%, avg=216276.40, stdev=120433.72, samples=20
00:25:14.839 iops : min= 402, max= 1954, avg=844.80, stdev=470.41, samples=20
00:25:14.839 lat (msec) : 2=0.06%, 4=0.43%, 10=5.86%, 20=14.24%, 50=22.13%
00:25:14.839 lat (msec) : 100=21.66%, 250=34.92%, 500=0.51%, 750=0.19%
00:25:14.839 cpu : usr=0.46%, sys=2.55%, ctx=1903, majf=0, minf=4097
00:25:14.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:25:14.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.839 issued rwts: total=8510,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.839 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:14.839 job3: (groupid=0, jobs=1): err= 0: pid=846764: Thu Jul 25 00:05:18 2024
00:25:14.839 read: IOPS=562, BW=141MiB/s (147MB/s)(1420MiB/10105msec)
00:25:14.839 slat (usec): min=13, max=313252, avg=1563.42, stdev=6938.20
00:25:14.839 clat (msec): min=5, max=527, avg=112.15, stdev=61.33
00:25:14.839 lat (msec): min=5, max=527, avg=113.71, stdev=62.20
00:25:14.839 clat percentiles (msec):
00:25:14.839 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 61],
00:25:14.839 | 30.00th=[ 72], 40.00th=[ 92], 50.00th=[ 108], 60.00th=[ 123],
00:25:14.839 | 70.00th=[ 138], 80.00th=[ 157], 90.00th=[ 186], 95.00th=[ 220],
00:25:14.839 | 99.00th=[ 338], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393],
00:25:14.839 | 99.99th=[ 527]
00:25:14.839 bw ( KiB/s): min=69632, max=320000, per=8.64%, avg=143820.80, stdev=62458.01, samples=20
00:25:14.839 iops : min= 272, max= 1250, avg=561.80, stdev=243.98, samples=20
00:25:14.839 lat (msec) : 10=0.18%, 20=1.60%, 50=13.06%, 100=29.52%, 250=53.78%
00:25:14.839 lat (msec) : 500=1.83%, 750=0.04%
00:25:14.839 cpu : usr=0.29%, sys=2.13%, ctx=1182, majf=0, minf=4097
00:25:14.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:25:14.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.839 issued rwts: total=5681,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.839 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:14.839 job4: (groupid=0, jobs=1): err= 0: pid=846765: Thu Jul 25 00:05:18 2024
00:25:14.839 read: IOPS=553, BW=138MiB/s (145MB/s)(1393MiB/10074msec)
00:25:14.839 slat (usec): min=13, max=139129, avg=1611.56, stdev=6265.03
00:25:14.839 clat (usec): min=1232, max=611163, avg=113981.09, stdev=82072.70
00:25:14.839 lat (usec): min=1249, max=621749, avg=115592.65, stdev=83376.96
00:25:14.839 clat percentiles (msec):
00:25:14.839 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 27], 20.00th=[ 46],
00:25:14.839 | 30.00th=[ 69], 40.00th=[ 87], 50.00th=[ 109], 60.00th=[ 123],
00:25:14.839 | 70.00th=[ 138], 80.00th=[ 161], 90.00th=[ 203], 95.00th=[ 251],
00:25:14.839 | 99.00th=[ 481], 99.50th=[ 567], 99.90th=[ 592], 99.95th=[ 592],
00:25:14.839 | 99.99th=[ 609]
00:25:14.839 bw ( KiB/s): min=32256, max=280576, per=8.47%, avg=141012.50, stdev=72051.88, samples=20
00:25:14.839 iops : min= 126, max= 1096, avg=550.80, stdev=281.48, samples=20
00:25:14.839 lat (msec) : 2=0.14%, 4=0.84%, 10=2.14%, 20=3.66%, 50=16.40%
00:25:14.839 lat (msec) : 100=22.49%, 250=49.17%, 500=4.38%, 750=0.77%
00:25:14.839 cpu : usr=0.35%, sys=1.90%, ctx=1285, majf=0, minf=4097
00:25:14.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:25:14.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.839 issued rwts: total=5572,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.839 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:14.839 job5: (groupid=0, jobs=1): err= 0: pid=846766: Thu Jul 25 00:05:18 2024
00:25:14.839 read: IOPS=503, BW=126MiB/s (132MB/s)(1267MiB/10064msec)
00:25:14.839 slat (usec): min=9, max=141466, avg=1525.35, stdev=5571.02
00:25:14.839 clat (msec): min=4, max=333, avg=125.50, stdev=53.94
00:25:14.839 lat (msec): min=4, max=361, avg=127.02, stdev=54.82
00:25:14.839 clat percentiles (msec):
00:25:14.839 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 57], 20.00th=[ 85],
00:25:14.839 | 30.00th=[ 106], 40.00th=[ 117], 50.00th=[ 125], 60.00th=[ 133],
00:25:14.839 | 70.00th=[ 144], 80.00th=[ 163], 90.00th=[ 192], 95.00th=[ 218],
00:25:14.839 | 99.00th=[ 296], 99.50th=[ 321], 99.90th=[ 330], 99.95th=[ 334],
00:25:14.839 | 99.99th=[ 334]
00:25:14.839 bw ( KiB/s): min=70144, max=202240, per=7.70%, avg=128076.80, stdev=35121.10, samples=20
00:25:14.839 iops : min= 274, max= 790, avg=500.30, stdev=137.19, samples=20
00:25:14.839 lat (msec) : 10=1.26%, 20=1.91%, 50=5.13%, 100=19.19%, 250=70.39%
00:25:14.839 lat (msec) : 500=2.11%
00:25:14.839 cpu : usr=0.32%, sys=1.73%, ctx=1219, majf=0, minf=4097
00:25:14.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:25:14.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.839 issued rwts: total=5066,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.840 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:14.840 job6: (groupid=0, jobs=1): err= 0: pid=846768: Thu Jul 25 00:05:18 2024
00:25:14.840 read: IOPS=507, BW=127MiB/s (133MB/s)(1282MiB/10097msec)
00:25:14.840 slat (usec): min=11, max=193871, avg=1607.03, stdev=6382.41
00:25:14.840 clat (msec): min=2, max=594, avg=124.32, stdev=73.38
00:25:14.840 lat (msec): min=2, max=594, avg=125.93, stdev=74.42
00:25:14.840 clat percentiles (msec):
00:25:14.840 | 1.00th=[ 16], 5.00th=[ 48], 10.00th=[ 60], 20.00th=[ 72],
00:25:14.840 | 30.00th=[ 89], 40.00th=[ 102], 50.00th=[ 115], 60.00th=[ 125],
00:25:14.840 | 70.00th=[ 136], 80.00th=[ 155], 90.00th=[ 201], 95.00th=[ 251],
00:25:14.840 | 99.00th=[ 493], 99.50th=[ 523], 99.90th=[ 550], 99.95th=[ 567],
00:25:14.840 | 99.99th=[ 592]
00:25:14.840 bw ( KiB/s): min=32256, max=212480, per=7.79%, avg=129587.20, stdev=48753.16, samples=20
00:25:14.840 iops : min= 126, max= 830, avg=506.20, stdev=190.44, samples=20
00:25:14.840 lat (msec) : 4=0.08%, 10=0.43%, 20=1.07%, 50=3.98%, 100=33.67%
00:25:14.840 lat (msec) : 250=55.74%, 500=4.21%, 750=0.82%
00:25:14.840 cpu : usr=0.22%, sys=1.93%, ctx=1139, majf=0, minf=4097
00:25:14.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:25:14.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.840 issued rwts: total=5126,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.840 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:14.840 job7: (groupid=0, jobs=1): err= 0: pid=846769: Thu Jul 25 00:05:18 2024
00:25:14.840 read: IOPS=545, BW=136MiB/s (143MB/s)(1379MiB/10106msec)
00:25:14.840 slat (usec): min=9, max=127694, avg=1102.72, stdev=5547.30
00:25:14.840 clat (usec): min=1959, max=590612, avg=116009.31, stdev=78833.14
00:25:14.840 lat (usec): min=1986, max=590650, avg=117112.03, stdev=80091.88
00:25:14.840 clat percentiles (msec):
00:25:14.840 | 1.00th=[ 11], 5.00th=[ 25], 10.00th=[ 39], 20.00th=[ 58],
00:25:14.840 | 30.00th=[ 72], 40.00th=[ 90], 50.00th=[ 103], 60.00th=[ 117],
00:25:14.840 | 70.00th=[ 133], 80.00th=[ 157], 90.00th=[ 207], 95.00th=[ 257],
00:25:14.840 | 99.00th=[ 460], 99.50th=[ 502], 99.90th=[ 542], 99.95th=[ 575],
00:25:14.840 | 99.99th=[ 592]
00:25:14.840 bw ( KiB/s): min=33280, max=275456, per=8.39%, avg=139622.40, stdev=63259.21, samples=20
00:25:14.840 iops : min= 130, max= 1076, avg=545.40, stdev=247.11, samples=20
00:25:14.840 lat (msec) : 2=0.02%, 4=0.38%, 10=0.60%, 20=2.86%, 50=12.94%
00:25:14.840 lat (msec) : 100=31.29%, 250=46.53%, 500=4.79%, 750=0.60%
00:25:14.840 cpu : usr=0.35%, sys=1.54%, ctx=1402, majf=0, minf=4097
00:25:14.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:25:14.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.840 issued rwts: total=5517,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.840 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:14.840 job8: (groupid=0, jobs=1): err= 0: pid=846770: Thu Jul 25 00:05:18 2024
00:25:14.840 read: IOPS=691, BW=173MiB/s (181MB/s)(1744MiB/10088msec)
00:25:14.840 slat (usec): min=10, max=182954, avg=1048.42, stdev=4980.63
00:25:14.840 clat (msec): min=2, max=639, avg=91.42, stdev=74.69
00:25:14.840 lat (msec): min=2, max=639, avg=92.47, stdev=75.38
00:25:14.840 clat percentiles (msec):
00:25:14.840 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 37], 20.00th=[ 43],
00:25:14.840 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 79],
00:25:14.840 | 70.00th=[ 92], 80.00th=[ 125], 90.00th=[ 176], 95.00th=[ 224],
00:25:14.840 | 99.00th=[ 435], 99.50th=[ 518], 99.90th=[ 575], 99.95th=[ 575],
00:25:14.840 | 99.99th=[ 642]
00:25:14.840 bw ( KiB/s): min=49152, max=348160, per=10.63%, avg=176947.20, stdev=89879.49, samples=20
00:25:14.840 iops : min= 192, max= 1360, avg=691.20, stdev=351.09, samples=20
00:25:14.840 lat (msec) : 4=0.20%, 10=2.02%, 20=2.75%, 50=19.11%, 100=49.48%
00:25:14.840 lat (msec) : 250=22.71%, 500=3.01%, 750=0.72%
00:25:14.840 cpu : usr=0.34%, sys=2.16%, ctx=1525, majf=0, minf=3721
00:25:14.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:25:14.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.840 issued rwts: total=6975,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.840 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:14.840 job9: (groupid=0, jobs=1): err= 0: pid=846771: Thu Jul 25 00:05:18 2024
00:25:14.840 read: IOPS=572, BW=143MiB/s (150MB/s)(1441MiB/10076msec)
00:25:14.840 slat (usec): min=11, max=97108, avg=1503.47, stdev=5183.07
00:25:14.840 clat (msec): min=2, max=408, avg=110.24, stdev=67.40
00:25:14.840 lat (msec): min=3, max=414, avg=111.75, stdev=68.27
00:25:14.840 clat percentiles (msec):
00:25:14.840 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 24], 20.00th=[ 36],
00:25:14.840 | 30.00th=[ 75], 40.00th=[ 102], 50.00th=[ 112], 60.00th=[ 124],
00:25:14.840 | 70.00th=[ 136], 80.00th=[ 155], 90.00th=[ 182], 95.00th=[ 224],
00:25:14.840 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 368], 99.95th=[ 368],
00:25:14.840 | 99.99th=[ 409]
00:25:14.840 bw ( KiB/s): min=64512, max=484864, per=8.77%, avg=145985.40, stdev=89180.65, samples=20
00:25:14.840 iops : min= 252, max= 1894, avg=570.25, stdev=348.36, samples=20
00:25:14.840 lat (msec) : 4=0.07%, 10=3.61%, 20=2.01%, 50=19.44%, 100=13.29%
00:25:14.840 lat (msec) : 250=57.81%, 500=3.76%
00:25:14.840 cpu : usr=0.41%, sys=2.00%, ctx=1206, majf=0, minf=4097
00:25:14.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:25:14.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.840 issued rwts: total=5765,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.840 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:14.840 job10: (groupid=0, jobs=1): err= 0: pid=846772: Thu Jul 25 00:05:18 2024
00:25:14.840 read: IOPS=455, BW=114MiB/s (119MB/s)(1149MiB/10097msec)
00:25:14.840 slat (usec): min=11, max=180374, avg=1830.60, stdev=7361.42
00:25:14.840 clat (msec): min=3, max=611, avg=138.64, stdev=76.56
00:25:14.840 lat (msec): min=3, max=611, avg=140.47, stdev=77.83
00:25:14.840 clat percentiles (msec):
00:25:14.840 | 1.00th=[ 21], 5.00th=[ 51], 10.00th=[ 68], 20.00th=[ 90],
00:25:14.840 | 30.00th=[ 102], 40.00th=[ 114], 50.00th=[ 127], 60.00th=[ 136],
00:25:14.840 | 70.00th=[ 155], 80.00th=[ 174], 90.00th=[ 220], 95.00th=[ 271],
00:25:14.840 | 99.00th=[ 498], 99.50th=[ 542], 99.90th=[ 584], 99.95th=[ 584],
00:25:14.840 | 99.99th=[ 609]
00:25:14.840 bw ( KiB/s): min=38400, max=178688, per=6.97%, avg=116019.20, stdev=43884.29, samples=20
00:25:14.840 iops : min= 150, max= 698, avg=453.20, stdev=171.42, samples=20
00:25:14.840 lat (msec) : 4=0.09%, 10=0.46%, 20=0.41%, 50=3.89%, 100=23.80%
00:25:14.840 lat (msec) : 250=65.47%, 500=4.90%, 750=0.98%
00:25:14.840 cpu : usr=0.25%, sys=1.66%, ctx=1105, majf=0, minf=4097
00:25:14.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%,
8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:14.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:14.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:14.840 issued rwts: total=4596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:14.840 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:14.840 00:25:14.840 Run status group 0 (all jobs): 00:25:14.840 READ: bw=1625MiB/s (1704MB/s), 114MiB/s-211MiB/s (119MB/s-222MB/s), io=16.0GiB (17.2GB), run=10045-10106msec 00:25:14.840 00:25:14.840 Disk stats (read/write): 00:25:14.840 nvme0n1: ios=12354/0, merge=0/0, ticks=1237598/0, in_queue=1237598, util=97.24% 00:25:14.840 nvme10n1: ios=12989/0, merge=0/0, ticks=1239181/0, in_queue=1239181, util=97.45% 00:25:14.840 nvme1n1: ios=16818/0, merge=0/0, ticks=1239396/0, in_queue=1239396, util=97.71% 00:25:14.840 nvme2n1: ios=11173/0, merge=0/0, ticks=1232191/0, in_queue=1232191, util=97.86% 00:25:14.840 nvme3n1: ios=10939/0, merge=0/0, ticks=1233010/0, in_queue=1233010, util=97.93% 00:25:14.840 nvme4n1: ios=9844/0, merge=0/0, ticks=1240244/0, in_queue=1240244, util=98.26% 00:25:14.840 nvme5n1: ios=10064/0, merge=0/0, ticks=1234237/0, in_queue=1234237, util=98.45% 00:25:14.840 nvme6n1: ios=10892/0, merge=0/0, ticks=1239866/0, in_queue=1239866, util=98.55% 00:25:14.840 nvme7n1: ios=13693/0, merge=0/0, ticks=1238827/0, in_queue=1238827, util=98.94% 00:25:14.840 nvme8n1: ios=11341/0, merge=0/0, ticks=1230916/0, in_queue=1230916, util=99.08% 00:25:14.840 nvme9n1: ios=9031/0, merge=0/0, ticks=1236754/0, in_queue=1236754, util=99.19% 00:25:14.840 00:05:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:14.840 [global] 00:25:14.840 thread=1 00:25:14.840 invalidate=1 00:25:14.840 rw=randwrite 00:25:14.840 time_based=1 00:25:14.840 runtime=10 00:25:14.840 ioengine=libaio 00:25:14.840 
direct=1 00:25:14.840 bs=262144 00:25:14.840 iodepth=64 00:25:14.840 norandommap=1 00:25:14.840 numjobs=1 00:25:14.840 00:25:14.840 [job0] 00:25:14.840 filename=/dev/nvme0n1 00:25:14.840 [job1] 00:25:14.840 filename=/dev/nvme10n1 00:25:14.840 [job2] 00:25:14.840 filename=/dev/nvme1n1 00:25:14.840 [job3] 00:25:14.840 filename=/dev/nvme2n1 00:25:14.840 [job4] 00:25:14.840 filename=/dev/nvme3n1 00:25:14.840 [job5] 00:25:14.840 filename=/dev/nvme4n1 00:25:14.840 [job6] 00:25:14.840 filename=/dev/nvme5n1 00:25:14.840 [job7] 00:25:14.840 filename=/dev/nvme6n1 00:25:14.840 [job8] 00:25:14.841 filename=/dev/nvme7n1 00:25:14.841 [job9] 00:25:14.841 filename=/dev/nvme8n1 00:25:14.841 [job10] 00:25:14.841 filename=/dev/nvme9n1 00:25:14.841 Could not set queue depth (nvme0n1) 00:25:14.841 Could not set queue depth (nvme10n1) 00:25:14.841 Could not set queue depth (nvme1n1) 00:25:14.841 Could not set queue depth (nvme2n1) 00:25:14.841 Could not set queue depth (nvme3n1) 00:25:14.841 Could not set queue depth (nvme4n1) 00:25:14.841 Could not set queue depth (nvme5n1) 00:25:14.841 Could not set queue depth (nvme6n1) 00:25:14.841 Could not set queue depth (nvme7n1) 00:25:14.841 Could not set queue depth (nvme8n1) 00:25:14.841 Could not set queue depth (nvme9n1) 00:25:14.841 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.841 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.841 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.841 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.841 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.841 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.841 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.841 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.841 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.841 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.841 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.841 fio-3.35 00:25:14.841 Starting 11 threads 00:25:24.853 00:25:24.853 job0: (groupid=0, jobs=1): err= 0: pid=847785: Thu Jul 25 00:05:29 2024 00:25:24.853 write: IOPS=467, BW=117MiB/s (123MB/s)(1181MiB/10107msec); 0 zone resets 00:25:24.853 slat (usec): min=23, max=133172, avg=1541.47, stdev=4308.24 00:25:24.853 clat (msec): min=2, max=368, avg=135.26, stdev=61.06 00:25:24.853 lat (msec): min=2, max=368, avg=136.80, stdev=62.04 00:25:24.853 clat percentiles (msec): 00:25:24.853 | 1.00th=[ 12], 5.00th=[ 28], 10.00th=[ 49], 20.00th=[ 88], 00:25:24.853 | 30.00th=[ 112], 40.00th=[ 122], 50.00th=[ 130], 60.00th=[ 148], 00:25:24.853 | 70.00th=[ 165], 80.00th=[ 184], 90.00th=[ 215], 95.00th=[ 245], 00:25:24.853 | 99.00th=[ 288], 99.50th=[ 300], 99.90th=[ 317], 99.95th=[ 368], 00:25:24.853 | 99.99th=[ 368] 00:25:24.853 bw ( KiB/s): min=67584, max=217088, per=8.83%, avg=119347.40, stdev=40413.28, samples=20 00:25:24.853 iops : min= 264, max= 848, avg=466.15, stdev=157.91, samples=20 00:25:24.853 lat (msec) : 4=0.23%, 10=0.68%, 20=2.79%, 50=6.69%, 100=13.61% 00:25:24.853 lat (msec) : 250=72.25%, 500=3.75% 00:25:24.853 cpu : usr=1.73%, sys=1.87%, ctx=2466, majf=0, minf=1 00:25:24.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:24.853 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.853 issued rwts: total=0,4725,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.853 job1: (groupid=0, jobs=1): err= 0: pid=847786: Thu Jul 25 00:05:29 2024 00:25:24.853 write: IOPS=436, BW=109MiB/s (114MB/s)(1102MiB/10106msec); 0 zone resets 00:25:24.853 slat (usec): min=21, max=73987, avg=1760.16, stdev=4308.29 00:25:24.853 clat (msec): min=2, max=285, avg=144.91, stdev=56.09 00:25:24.853 lat (msec): min=2, max=288, avg=146.67, stdev=56.95 00:25:24.853 clat percentiles (msec): 00:25:24.853 | 1.00th=[ 15], 5.00th=[ 49], 10.00th=[ 77], 20.00th=[ 102], 00:25:24.853 | 30.00th=[ 116], 40.00th=[ 129], 50.00th=[ 144], 60.00th=[ 157], 00:25:24.853 | 70.00th=[ 180], 80.00th=[ 192], 90.00th=[ 215], 95.00th=[ 247], 00:25:24.853 | 99.00th=[ 275], 99.50th=[ 284], 99.90th=[ 284], 99.95th=[ 284], 00:25:24.853 | 99.99th=[ 288] 00:25:24.853 bw ( KiB/s): min=70144, max=177152, per=8.23%, avg=111208.75, stdev=29488.73, samples=20 00:25:24.853 iops : min= 274, max= 692, avg=434.35, stdev=115.22, samples=20 00:25:24.853 lat (msec) : 4=0.20%, 10=0.61%, 20=0.64%, 50=3.88%, 100=14.36% 00:25:24.853 lat (msec) : 250=76.11%, 500=4.20% 00:25:24.853 cpu : usr=1.62%, sys=1.70%, ctx=2190, majf=0, minf=1 00:25:24.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:24.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.853 issued rwts: total=0,4407,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.853 job2: (groupid=0, jobs=1): err= 0: pid=847787: Thu Jul 25 00:05:29 2024 00:25:24.853 write: IOPS=484, BW=121MiB/s (127MB/s)(1222MiB/10091msec); 0 zone resets 00:25:24.853 slat (usec): 
min=21, max=55296, avg=1702.75, stdev=4017.08 00:25:24.853 clat (msec): min=3, max=265, avg=130.39, stdev=56.00 00:25:24.853 lat (msec): min=3, max=270, avg=132.10, stdev=56.85 00:25:24.853 clat percentiles (msec): 00:25:24.853 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 53], 20.00th=[ 85], 00:25:24.853 | 30.00th=[ 99], 40.00th=[ 117], 50.00th=[ 136], 60.00th=[ 150], 00:25:24.853 | 70.00th=[ 159], 80.00th=[ 184], 90.00th=[ 205], 95.00th=[ 220], 00:25:24.853 | 99.00th=[ 243], 99.50th=[ 255], 99.90th=[ 262], 99.95th=[ 262], 00:25:24.853 | 99.99th=[ 266] 00:25:24.853 bw ( KiB/s): min=79872, max=204800, per=9.14%, avg=123470.20, stdev=43992.52, samples=20 00:25:24.853 iops : min= 312, max= 800, avg=482.25, stdev=171.88, samples=20 00:25:24.853 lat (msec) : 4=0.04%, 10=0.94%, 20=2.35%, 50=6.22%, 100=21.74% 00:25:24.853 lat (msec) : 250=68.01%, 500=0.70% 00:25:24.853 cpu : usr=1.58%, sys=2.13%, ctx=2212, majf=0, minf=1 00:25:24.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:24.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.853 issued rwts: total=0,4886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.853 job3: (groupid=0, jobs=1): err= 0: pid=847806: Thu Jul 25 00:05:29 2024 00:25:24.853 write: IOPS=454, BW=114MiB/s (119MB/s)(1157MiB/10177msec); 0 zone resets 00:25:24.853 slat (usec): min=22, max=61915, avg=1615.81, stdev=4057.65 00:25:24.853 clat (usec): min=1367, max=376036, avg=139012.96, stdev=63310.00 00:25:24.853 lat (usec): min=1460, max=376079, avg=140628.78, stdev=64194.99 00:25:24.853 clat percentiles (msec): 00:25:24.853 | 1.00th=[ 9], 5.00th=[ 36], 10.00th=[ 54], 20.00th=[ 79], 00:25:24.853 | 30.00th=[ 99], 40.00th=[ 122], 50.00th=[ 142], 60.00th=[ 163], 00:25:24.853 | 70.00th=[ 186], 80.00th=[ 197], 90.00th=[ 211], 95.00th=[ 239], 
00:25:24.853 | 99.00th=[ 271], 99.50th=[ 279], 99.90th=[ 363], 99.95th=[ 363], 00:25:24.853 | 99.99th=[ 376] 00:25:24.853 bw ( KiB/s): min=67584, max=237056, per=8.65%, avg=116869.30, stdev=42523.51, samples=20 00:25:24.853 iops : min= 264, max= 926, avg=456.50, stdev=166.12, samples=20 00:25:24.853 lat (msec) : 2=0.09%, 4=0.15%, 10=0.82%, 20=1.36%, 50=6.76% 00:25:24.853 lat (msec) : 100=21.86%, 250=65.41%, 500=3.54% 00:25:24.853 cpu : usr=1.49%, sys=1.83%, ctx=2495, majf=0, minf=1 00:25:24.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:24.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.853 issued rwts: total=0,4629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.853 job4: (groupid=0, jobs=1): err= 0: pid=847807: Thu Jul 25 00:05:29 2024 00:25:24.853 write: IOPS=504, BW=126MiB/s (132MB/s)(1283MiB/10176msec); 0 zone resets 00:25:24.853 slat (usec): min=15, max=50574, avg=1308.84, stdev=3643.90 00:25:24.853 clat (usec): min=1607, max=371272, avg=125490.50, stdev=67760.21 00:25:24.853 lat (usec): min=1643, max=371312, avg=126799.35, stdev=68686.42 00:25:24.853 clat percentiles (msec): 00:25:24.853 | 1.00th=[ 10], 5.00th=[ 23], 10.00th=[ 36], 20.00th=[ 65], 00:25:24.853 | 30.00th=[ 81], 40.00th=[ 99], 50.00th=[ 121], 60.00th=[ 142], 00:25:24.853 | 70.00th=[ 163], 80.00th=[ 190], 90.00th=[ 218], 95.00th=[ 245], 00:25:24.853 | 99.00th=[ 275], 99.50th=[ 305], 99.90th=[ 355], 99.95th=[ 355], 00:25:24.853 | 99.99th=[ 372] 00:25:24.854 bw ( KiB/s): min=65536, max=234027, per=9.61%, avg=129778.05, stdev=45330.34, samples=20 00:25:24.854 iops : min= 256, max= 914, avg=506.90, stdev=177.08, samples=20 00:25:24.854 lat (msec) : 2=0.04%, 4=0.10%, 10=1.03%, 20=2.88%, 50=10.42% 00:25:24.854 lat (msec) : 100=26.32%, 250=55.04%, 500=4.17% 00:25:24.854 cpu : 
usr=1.57%, sys=2.22%, ctx=3059, majf=0, minf=1 00:25:24.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:24.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.854 issued rwts: total=0,5133,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.854 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.854 job5: (groupid=0, jobs=1): err= 0: pid=847808: Thu Jul 25 00:05:29 2024 00:25:24.854 write: IOPS=598, BW=150MiB/s (157MB/s)(1524MiB/10188msec); 0 zone resets 00:25:24.854 slat (usec): min=17, max=91572, avg=919.32, stdev=3485.56 00:25:24.854 clat (usec): min=1639, max=296591, avg=105781.94, stdev=67155.97 00:25:24.854 lat (usec): min=1671, max=334366, avg=106701.25, stdev=67884.66 00:25:24.854 clat percentiles (msec): 00:25:24.854 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 20], 20.00th=[ 43], 00:25:24.854 | 30.00th=[ 57], 40.00th=[ 80], 50.00th=[ 105], 60.00th=[ 123], 00:25:24.854 | 70.00th=[ 138], 80.00th=[ 163], 90.00th=[ 197], 95.00th=[ 224], 00:25:24.854 | 99.00th=[ 275], 99.50th=[ 284], 99.90th=[ 296], 99.95th=[ 296], 00:25:24.854 | 99.99th=[ 296] 00:25:24.854 bw ( KiB/s): min=69258, max=235008, per=11.43%, avg=154365.35, stdev=44733.87, samples=20 00:25:24.854 iops : min= 270, max= 918, avg=602.95, stdev=174.81, samples=20 00:25:24.854 lat (msec) : 2=0.10%, 4=0.38%, 10=3.97%, 20=6.07%, 50=15.57% 00:25:24.854 lat (msec) : 100=22.28%, 250=48.51%, 500=3.12% 00:25:24.854 cpu : usr=2.28%, sys=2.26%, ctx=4296, majf=0, minf=1 00:25:24.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:24.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.854 issued rwts: total=0,6094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.854 latency : target=0, window=0, percentile=100.00%, 
depth=64 00:25:24.854 job6: (groupid=0, jobs=1): err= 0: pid=847809: Thu Jul 25 00:05:29 2024 00:25:24.854 write: IOPS=540, BW=135MiB/s (142MB/s)(1377MiB/10187msec); 0 zone resets 00:25:24.854 slat (usec): min=17, max=201950, avg=1169.47, stdev=5090.46 00:25:24.854 clat (msec): min=2, max=406, avg=116.79, stdev=60.39 00:25:24.854 lat (msec): min=2, max=406, avg=117.95, stdev=61.07 00:25:24.854 clat percentiles (msec): 00:25:24.854 | 1.00th=[ 9], 5.00th=[ 33], 10.00th=[ 51], 20.00th=[ 59], 00:25:24.854 | 30.00th=[ 72], 40.00th=[ 91], 50.00th=[ 113], 60.00th=[ 131], 00:25:24.854 | 70.00th=[ 153], 80.00th=[ 178], 90.00th=[ 197], 95.00th=[ 207], 00:25:24.854 | 99.00th=[ 275], 99.50th=[ 309], 99.90th=[ 405], 99.95th=[ 405], 00:25:24.854 | 99.99th=[ 405] 00:25:24.854 bw ( KiB/s): min=73728, max=252408, per=10.32%, avg=139434.35, stdev=46491.75, samples=20 00:25:24.854 iops : min= 288, max= 985, avg=544.60, stdev=181.51, samples=20 00:25:24.854 lat (msec) : 4=0.20%, 10=1.09%, 20=1.94%, 50=6.64%, 100=33.56% 00:25:24.854 lat (msec) : 250=55.13%, 500=1.43% 00:25:24.854 cpu : usr=2.03%, sys=2.37%, ctx=3012, majf=0, minf=1 00:25:24.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:24.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.854 issued rwts: total=0,5509,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.854 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.854 job7: (groupid=0, jobs=1): err= 0: pid=847810: Thu Jul 25 00:05:29 2024 00:25:24.854 write: IOPS=439, BW=110MiB/s (115MB/s)(1119MiB/10180msec); 0 zone resets 00:25:24.854 slat (usec): min=16, max=50516, avg=1664.89, stdev=3992.07 00:25:24.854 clat (msec): min=7, max=378, avg=143.75, stdev=55.17 00:25:24.854 lat (msec): min=7, max=378, avg=145.42, stdev=56.07 00:25:24.854 clat percentiles (msec): 00:25:24.854 | 1.00th=[ 24], 5.00th=[ 47], 10.00th=[ 
63], 20.00th=[ 100], 00:25:24.854 | 30.00th=[ 118], 40.00th=[ 132], 50.00th=[ 146], 60.00th=[ 163], 00:25:24.854 | 70.00th=[ 180], 80.00th=[ 194], 90.00th=[ 209], 95.00th=[ 226], 00:25:24.854 | 99.00th=[ 251], 99.50th=[ 284], 99.90th=[ 368], 99.95th=[ 368], 00:25:24.854 | 99.99th=[ 380] 00:25:24.854 bw ( KiB/s): min=74240, max=158720, per=8.37%, avg=113002.95, stdev=27670.85, samples=20 00:25:24.854 iops : min= 290, max= 620, avg=441.35, stdev=108.08, samples=20 00:25:24.854 lat (msec) : 10=0.02%, 20=0.56%, 50=5.54%, 100=14.25%, 250=78.58% 00:25:24.854 lat (msec) : 500=1.05% 00:25:24.854 cpu : usr=1.80%, sys=1.76%, ctx=2364, majf=0, minf=1 00:25:24.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:24.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.854 issued rwts: total=0,4477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.854 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.854 job8: (groupid=0, jobs=1): err= 0: pid=847811: Thu Jul 25 00:05:29 2024 00:25:24.854 write: IOPS=400, BW=100MiB/s (105MB/s)(1018MiB/10176msec); 0 zone resets 00:25:24.854 slat (usec): min=25, max=70550, avg=2164.40, stdev=4535.31 00:25:24.854 clat (msec): min=3, max=368, avg=157.71, stdev=50.91 00:25:24.854 lat (msec): min=5, max=368, avg=159.87, stdev=51.65 00:25:24.854 clat percentiles (msec): 00:25:24.854 | 1.00th=[ 20], 5.00th=[ 62], 10.00th=[ 92], 20.00th=[ 126], 00:25:24.854 | 30.00th=[ 136], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 169], 00:25:24.854 | 70.00th=[ 186], 80.00th=[ 199], 90.00th=[ 213], 95.00th=[ 239], 00:25:24.854 | 99.00th=[ 262], 99.50th=[ 309], 99.90th=[ 359], 99.95th=[ 359], 00:25:24.854 | 99.99th=[ 368] 00:25:24.854 bw ( KiB/s): min=73728, max=152576, per=7.59%, avg=102588.00, stdev=22929.44, samples=20 00:25:24.854 iops : min= 288, max= 596, avg=400.70, stdev=89.60, samples=20 00:25:24.854 lat 
(msec) : 4=0.02%, 10=0.22%, 20=0.91%, 50=3.32%, 100=7.98% 00:25:24.854 lat (msec) : 250=84.70%, 500=2.85% 00:25:24.854 cpu : usr=1.64%, sys=1.36%, ctx=1614, majf=0, minf=1 00:25:24.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:24.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.854 issued rwts: total=0,4071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.854 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.854 job9: (groupid=0, jobs=1): err= 0: pid=847812: Thu Jul 25 00:05:29 2024 00:25:24.854 write: IOPS=391, BW=97.8MiB/s (103MB/s)(988MiB/10107msec); 0 zone resets 00:25:24.854 slat (usec): min=26, max=40489, avg=2480.21, stdev=4569.94 00:25:24.854 clat (msec): min=8, max=255, avg=161.06, stdev=38.13 00:25:24.854 lat (msec): min=8, max=255, avg=163.54, stdev=38.50 00:25:24.854 clat percentiles (msec): 00:25:24.854 | 1.00th=[ 50], 5.00th=[ 112], 10.00th=[ 120], 20.00th=[ 129], 00:25:24.854 | 30.00th=[ 142], 40.00th=[ 150], 50.00th=[ 157], 60.00th=[ 165], 00:25:24.854 | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 215], 95.00th=[ 230], 00:25:24.854 | 99.00th=[ 251], 99.50th=[ 253], 99.90th=[ 255], 99.95th=[ 255], 00:25:24.854 | 99.99th=[ 255] 00:25:24.854 bw ( KiB/s): min=79872, max=133632, per=7.37%, avg=99550.20, stdev=16883.79, samples=20 00:25:24.854 iops : min= 312, max= 522, avg=388.85, stdev=65.97, samples=20 00:25:24.854 lat (msec) : 10=0.03%, 50=0.99%, 100=1.57%, 250=96.13%, 500=1.29% 00:25:24.854 cpu : usr=1.50%, sys=1.37%, ctx=1128, majf=0, minf=1 00:25:24.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:24.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.854 issued rwts: total=0,3952,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.854 
latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.854 job10: (groupid=0, jobs=1): err= 0: pid=847813: Thu Jul 25 00:05:29 2024 00:25:24.854 write: IOPS=582, BW=146MiB/s (153MB/s)(1470MiB/10093msec); 0 zone resets 00:25:24.854 slat (usec): min=19, max=264667, avg=1331.53, stdev=4653.32 00:25:24.854 clat (msec): min=2, max=363, avg=108.50, stdev=55.94 00:25:24.854 lat (msec): min=2, max=363, avg=109.83, stdev=56.51 00:25:24.854 clat percentiles (msec): 00:25:24.854 | 1.00th=[ 17], 5.00th=[ 45], 10.00th=[ 49], 20.00th=[ 66], 00:25:24.854 | 30.00th=[ 77], 40.00th=[ 83], 50.00th=[ 94], 60.00th=[ 106], 00:25:24.854 | 70.00th=[ 124], 80.00th=[ 153], 90.00th=[ 197], 95.00th=[ 215], 00:25:24.854 | 99.00th=[ 275], 99.50th=[ 296], 99.90th=[ 317], 99.95th=[ 321], 00:25:24.854 | 99.99th=[ 363] 00:25:24.854 bw ( KiB/s): min=79872, max=279552, per=11.02%, avg=148852.10, stdev=54938.61, samples=20 00:25:24.854 iops : min= 312, max= 1092, avg=581.40, stdev=214.62, samples=20 00:25:24.854 lat (msec) : 4=0.02%, 10=0.46%, 20=0.83%, 50=9.75%, 100=45.03% 00:25:24.854 lat (msec) : 250=41.56%, 500=2.35% 00:25:24.854 cpu : usr=2.21%, sys=2.45%, ctx=2508, majf=0, minf=1 00:25:24.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:24.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.854 issued rwts: total=0,5878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.854 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.854 00:25:24.854 Run status group 0 (all jobs): 00:25:24.854 WRITE: bw=1319MiB/s (1383MB/s), 97.8MiB/s-150MiB/s (103MB/s-157MB/s), io=13.1GiB (14.1GB), run=10091-10188msec 00:25:24.854 00:25:24.854 Disk stats (read/write): 00:25:24.854 nvme0n1: ios=41/9246, merge=0/0, ticks=282/1218496, in_queue=1218778, util=99.72% 00:25:24.854 nvme10n1: ios=51/8612, merge=0/0, ticks=2737/1214249, in_queue=1216986, 
util=99.79% 00:25:24.854 nvme1n1: ios=50/9554, merge=0/0, ticks=3208/1213266, in_queue=1216474, util=99.99% 00:25:24.854 nvme2n1: ios=43/9249, merge=0/0, ticks=40/1245467, in_queue=1245507, util=97.88% 00:25:24.854 nvme3n1: ios=26/10265, merge=0/0, ticks=33/1248901, in_queue=1248934, util=97.95% 00:25:24.855 nvme4n1: ios=52/12172, merge=0/0, ticks=1668/1251364, in_queue=1253032, util=100.00% 00:25:24.855 nvme5n1: ios=49/11006, merge=0/0, ticks=3406/1190629, in_queue=1194035, util=100.00% 00:25:24.855 nvme6n1: ios=45/8950, merge=0/0, ticks=776/1245338, in_queue=1246114, util=100.00% 00:25:24.855 nvme7n1: ios=39/8134, merge=0/0, ticks=195/1238364, in_queue=1238559, util=100.00% 00:25:24.855 nvme8n1: ios=40/7702, merge=0/0, ticks=1363/1206761, in_queue=1208124, util=100.00% 00:25:24.855 nvme9n1: ios=45/11533, merge=0/0, ticks=2577/1194911, in_queue=1197488, util=100.00% 00:25:24.855 00:05:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:24.855 00:05:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:24.855 00:05:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.855 00:05:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:24.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:24.855 00:05:30 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:24.855 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.855 
00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.855 00:05:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:25.114 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:25.114 00:05:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:25.114 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:25.114 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:25.114 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:25.114 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:25.114 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:25.114 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:25.114 00:05:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:25.114 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.114 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:25.114 00:05:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.114 00:05:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.114 00:05:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:25.374 
NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:25.374 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:25.374 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:25.374 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:25.374 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:25.374 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:25.374 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:25.374 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:25.374 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:25.374 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.374 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:25.374 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.374 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.374 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:25.634 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:25.634 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:25.634 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:25.634 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:25.634 00:05:31 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:25.634 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:25.634 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:25.634 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:25.634 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:25.634 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.634 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:25.634 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.634 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.634 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:25.893 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:25.893 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.893 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:26.151 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:26.151 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1215 -- # local i=0 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.151 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:26.410 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.410 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.410 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:26.410 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:26.410 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:26.410 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:26.410 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:26.410 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:26.411 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:26.411 00:05:31 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:26.411 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:26.411 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:26.411 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.411 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:26.411 00:05:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.411 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.411 00:05:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:26.411 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.411 00:05:32 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:26.411 rmmod nvme_tcp 00:25:26.411 rmmod nvme_fabrics 00:25:26.411 rmmod nvme_keyring 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 842506 ']' 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 842506 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 842506 ']' 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 842506 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:26.411 00:05:32 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:26.411 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 842506 00:25:26.670 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:26.670 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:26.670 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 842506' 00:25:26.670 killing process with pid 842506 00:25:26.670 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 842506 00:25:26.670 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 842506 00:25:27.241 00:05:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:27.241 00:05:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:27.241 00:05:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:27.241 00:05:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:27.241 00:05:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:27.241 00:05:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.241 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.241 00:05:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.149 00:05:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:29.149 00:25:29.149 real 1m0.277s 00:25:29.149 user 3m18.140s 00:25:29.149 sys 0m25.462s 00:25:29.149 00:05:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:29.149 
00:05:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.149 ************************************ 00:25:29.149 END TEST nvmf_multiconnection 00:25:29.149 ************************************ 00:25:29.149 00:05:34 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:29.149 00:05:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:29.149 00:05:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:29.149 00:05:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:29.149 ************************************ 00:25:29.149 START TEST nvmf_initiator_timeout 00:25:29.149 ************************************ 00:25:29.149 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:29.149 * Looking for test storage... 
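The nvmf_multiconnection teardown traced above calls `waitforserial_disconnect SPDKn` after each `nvme disconnect`, and the trace shows it driving `lsblk -l -o NAME,SERIAL | grep -q -w SPDKn`. A minimal sketch of that helper, assuming (as the `local i=0` in the trace suggests) the real version in `autotest_common.sh` also bounds its retries:

```shell
# Hedged sketch of waitforserial_disconnect: poll lsblk until the given
# namespace serial no longer appears among block devices. The retry cap of
# 15 and the 1-second sleep are assumptions, not taken from the trace.
waitforserial_disconnect() {
    local serial="$1" i=0
    while lsblk -l -o NAME,SERIAL 2>/dev/null | grep -q -w "$serial"; do
        i=$((i + 1))
        if [ "$i" -ge 15 ]; then
            return 1    # device never went away
        fi
        sleep 1
    done
    return 0
}
```

In the trace each call returns on the first pass (line 1227, `return 0`) because the kernel has already torn the namespace down by the time `lsblk` runs.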
00:25:29.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:29.149 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:29.150 00:05:34 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:29.150 00:05:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 
00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.686 
00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:31.686 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 
-- # [[ tcp == rdma ]] 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:31.686 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:09:00.0: cvl_0_0' 00:25:31.686 Found net devices under 0000:09:00.0: cvl_0_0 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.686 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:31.687 Found net devices under 0000:09:00.1: cvl_0_1 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.687 00:05:36 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
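The sysfs walk traced a few entries above (common.sh lines 382-401) discovers the net interfaces behind each matched NVMf-capable PCI address by globbing `/sys/bus/pci/devices/$pci/net/` and stripping the directory prefix with `##*/`. A condensed sketch of that step; the function name and the sysfs-root parameter are hypothetical, added so the walk can be pointed at a test tree instead of the real `/sys`:

```shell
# Hedged sketch of the net-device discovery loop from nvmf/common.sh:
# for each PCI address, glob its net/ subdirectory and collect the bare
# interface names (e.g. cvl_0_0, cvl_0_1 in the trace above).
discover_net_devs() {
    local sysfs_root="$1"; shift
    local pci net_devs=()
    for pci in "$@"; do
        local pci_net_devs=("$sysfs_root/$pci/net/"*)
        [ -e "${pci_net_devs[0]}" ] || continue   # no net device under this PCI function
        net_devs+=("${pci_net_devs[@]##*/}")      # keep only the basename
    done
    printf '%s\n' "${net_devs[@]}"
}
```

With the two E810 functions from the trace (0000:09:00.0 and 0000:09:00.1) this yields the `cvl_0_0`/`cvl_0_1` pair that the subsequent netns setup splits into target and initiator sides.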
00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:31.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:25:31.687 00:25:31.687 --- 10.0.0.2 ping statistics --- 00:25:31.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.687 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:31.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:31.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:25:31.687 00:25:31.687 --- 10.0.0.1 ping statistics --- 00:25:31.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.687 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=851132 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 851132 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 851132 ']' 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:31.687 00:05:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:31.687 [2024-07-25 00:05:37.016087] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:25:31.687 [2024-07-25 00:05:37.016202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.687 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.687 [2024-07-25 00:05:37.084685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:31.687 [2024-07-25 00:05:37.177265] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.687 [2024-07-25 00:05:37.177321] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.687 [2024-07-25 00:05:37.177349] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.687 [2024-07-25 00:05:37.177363] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.687 [2024-07-25 00:05:37.177374] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:31.687 [2024-07-25 00:05:37.177471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.687 [2024-07-25 00:05:37.177549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.687 [2024-07-25 00:05:37.177645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:31.687 [2024-07-25 00:05:37.177647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:31.687 Malloc0 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:31.687 Delay0 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:31.687 [2024-07-25 00:05:37.365494] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:31.687 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.688 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:31.688 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.688 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:31.688 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.688 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:31.688 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.688 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.688 00:05:37 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.688 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:31.688 [2024-07-25 00:05:37.393740] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.688 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.688 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:32.623 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:32.623 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:25:32.623 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:32.623 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:32.623 00:05:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:25:34.525 00:05:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:34.525 00:05:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:34.525 00:05:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:25:34.525 00:05:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:34.525 00:05:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:34.525 00:05:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:25:34.525 00:05:40 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=851499 00:25:34.525 00:05:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:34.525 00:05:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:34.525 [global] 00:25:34.525 thread=1 00:25:34.525 invalidate=1 00:25:34.525 rw=write 00:25:34.525 time_based=1 00:25:34.525 runtime=60 00:25:34.525 ioengine=libaio 00:25:34.525 direct=1 00:25:34.525 bs=4096 00:25:34.525 iodepth=1 00:25:34.525 norandommap=0 00:25:34.525 numjobs=1 00:25:34.525 00:25:34.525 verify_dump=1 00:25:34.525 verify_backlog=512 00:25:34.525 verify_state_save=0 00:25:34.525 do_verify=1 00:25:34.525 verify=crc32c-intel 00:25:34.525 [job0] 00:25:34.525 filename=/dev/nvme0n1 00:25:34.525 Could not set queue depth (nvme0n1) 00:25:34.525 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:34.525 fio-3.35 00:25:34.525 Starting 1 thread 00:25:37.813 00:05:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:37.813 00:05:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.813 00:05:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.813 true 00:25:37.813 00:05:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.813 00:05:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:37.813 00:05:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.813 00:05:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.813 true 00:25:37.813 00:05:43 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.813 00:05:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:37.813 00:05:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.813 00:05:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.813 true 00:25:37.813 00:05:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.813 00:05:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:37.813 00:05:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.813 00:05:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.813 true 00:25:37.813 00:05:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.813 00:05:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:40.346 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:40.346 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.346 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.346 true 00:25:40.346 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.346 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:40.346 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.346 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.606 true 
00:25:40.606 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.606 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:40.606 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.606 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.606 true 00:25:40.606 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.607 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:40.607 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.607 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.607 true 00:25:40.607 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.607 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:40.607 00:05:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 851499 00:26:36.867 00:26:36.867 job0: (groupid=0, jobs=1): err= 0: pid=851629: Thu Jul 25 00:06:40 2024 00:26:36.867 read: IOPS=120, BW=483KiB/s (495kB/s)(28.3MiB/60001msec) 00:26:36.867 slat (nsec): min=4761, max=75905, avg=18970.67, stdev=10582.51 00:26:36.867 clat (usec): min=294, max=41221k, avg=7951.15, stdev=484229.31 00:26:36.867 lat (usec): min=300, max=41221k, avg=7970.12, stdev=484229.48 00:26:36.867 clat percentiles (usec): 00:26:36.867 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 334], 00:26:36.867 | 20.00th=[ 347], 30.00th=[ 363], 40.00th=[ 379], 00:26:36.867 | 50.00th=[ 388], 60.00th=[ 404], 70.00th=[ 433], 00:26:36.867 | 80.00th=[ 465], 90.00th=[ 494], 95.00th=[ 553], 00:26:36.867 | 99.00th=[ 41157], 99.50th=[ 41157], 
99.90th=[ 42206], 00:26:36.867 | 99.95th=[ 42206], 99.99th=[17112761] 00:26:36.867 write: IOPS=127, BW=512KiB/s (524kB/s)(30.0MiB/60001msec); 0 zone resets 00:26:36.867 slat (nsec): min=6182, max=71729, avg=16611.81, stdev=9566.54 00:26:36.867 clat (usec): min=207, max=809, avg=264.13, stdev=43.70 00:26:36.867 lat (usec): min=214, max=823, avg=280.74, stdev=48.03 00:26:36.867 clat percentiles (usec): 00:26:36.867 | 1.00th=[ 221], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 235], 00:26:36.867 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 258], 00:26:36.867 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 314], 95.00th=[ 371], 00:26:36.867 | 99.00th=[ 416], 99.50th=[ 437], 99.90th=[ 457], 99.95th=[ 506], 00:26:36.867 | 99.99th=[ 807] 00:26:36.867 bw ( KiB/s): min= 1744, max= 8192, per=100.00%, avg=4922.91, stdev=1788.52, samples=11 00:26:36.867 iops : min= 436, max= 2048, avg=1230.73, stdev=447.13, samples=11 00:26:36.867 lat (usec) : 250=27.87%, 500=68.47%, 750=1.40%, 1000=0.01% 00:26:36.867 lat (msec) : 4=0.01%, 50=2.23%, >=2000=0.01% 00:26:36.867 cpu : usr=0.26%, sys=0.44%, ctx=14928, majf=0, minf=2 00:26:36.867 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:36.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.867 issued rwts: total=7248,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.867 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:36.867 00:26:36.867 Run status group 0 (all jobs): 00:26:36.867 READ: bw=483KiB/s (495kB/s), 483KiB/s-483KiB/s (495kB/s-495kB/s), io=28.3MiB (29.7MB), run=60001-60001msec 00:26:36.867 WRITE: bw=512KiB/s (524kB/s), 512KiB/s-512KiB/s (524kB/s-524kB/s), io=30.0MiB (31.5MB), run=60001-60001msec 00:26:36.867 00:26:36.867 Disk stats (read/write): 00:26:36.867 nvme0n1: ios=7267/7413, merge=0/0, ticks=17669/1870, in_queue=19539, util=99.88% 00:26:36.867 00:06:40 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:36.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:36.867 nvmf hotplug test: fio successful as expected 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:36.867 rmmod nvme_tcp 00:26:36.867 rmmod nvme_fabrics 00:26:36.867 rmmod nvme_keyring 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 851132 ']' 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 851132 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 851132 ']' 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 851132 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 851132 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:36.867 
00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 851132' 00:26:36.867 killing process with pid 851132 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 851132 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 851132 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:36.867 00:06:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.435 00:06:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:37.435 00:26:37.435 real 1m8.108s 00:26:37.435 user 4m9.764s 00:26:37.435 sys 0m7.228s 00:26:37.435 00:06:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:37.435 00:06:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:37.435 ************************************ 00:26:37.435 END TEST nvmf_initiator_timeout 00:26:37.435 ************************************ 00:26:37.435 00:06:42 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:37.435 
00:06:42 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:37.435 00:06:42 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:37.435 00:06:42 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:37.435 00:06:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:39.338 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:39.338 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.338 
00:06:44 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:39.338 00:06:44 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:39.339 Found net devices under 0000:09:00.0: cvl_0_0 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:39.339 Found net devices under 0000:09:00.1: cvl_0_1 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:39.339 00:06:44 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:39.339 00:06:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:39.339 00:06:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:39.339 00:06:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:39.339 ************************************ 00:26:39.339 START TEST nvmf_perf_adq 00:26:39.339 ************************************ 00:26:39.339 00:06:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:39.339 * Looking for test storage... 
00:26:39.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.339 00:06:45 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:39.339 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:39.339 00:06:45 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:39.599 00:06:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:39.599 00:06:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:39.599 00:06:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:41.506 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.506 00:06:47 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:41.506 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.506 
00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:41.506 Found net devices under 0000:09:00.0: cvl_0_0 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:41.506 Found net devices under 0000:09:00.1: cvl_0_1 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:41.506 00:06:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.507 00:06:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:41.507 00:06:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:41.507 00:06:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:41.507 00:06:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:42.443 00:06:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:44.350 00:06:49 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.626 00:06:54 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:49.626 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.626 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:49.627 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.627 00:06:54 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:49.627 Found net devices under 0000:09:00.0: cvl_0_0 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:49.627 Found net devices under 0000:09:00.1: cvl_0_1 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:49.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:49.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:26:49.627 00:26:49.627 --- 10.0.0.2 ping statistics --- 00:26:49.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.627 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:49.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:49.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:26:49.627 00:26:49.627 --- 10.0.0.1 ping statistics --- 00:26:49.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.627 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=863767 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 863767 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- 
# '[' -z 863767 ']' 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:49.627 00:06:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.627 [2024-07-25 00:06:54.921622] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:26:49.627 [2024-07-25 00:06:54.921723] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:49.627 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.627 [2024-07-25 00:06:54.990561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:49.627 [2024-07-25 00:06:55.083062] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:49.627 [2024-07-25 00:06:55.083139] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:49.627 [2024-07-25 00:06:55.083167] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:49.627 [2024-07-25 00:06:55.083180] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:49.627 [2024-07-25 00:06:55.083193] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
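The `waitforlisten` step above blocks until the freshly launched nvmf_tgt accepts RPCs on the UNIX domain socket /var/tmp/spdk.sock, giving up after `max_retries=100` attempts. A minimal sketch of the same polling idea, with a plain temporary file standing in for the RPC socket (an assumption, so the example can run unprivileged and self-contained):

```shell
#!/usr/bin/env bash
# waitforlisten-style polling loop. The real harness probes /var/tmp/spdk.sock;
# here a file created by a background job stands in for the socket appearing.
rpc_addr=$(mktemp -u)      # a path that does not exist yet
max_retries=100

# Background "target" that comes up after a short delay.
( sleep 1; : > "$rpc_addr" ) &

i=0
while [ ! -e "$rpc_addr" ]; do
    i=$((i + 1))
    if [ "$i" -ge "$max_retries" ]; then
        echo "timed out waiting for $rpc_addr" >&2
        exit 1
    fi
    sleep 0.1
done
wait                        # reap the background job
echo "listening: $rpc_addr"
rm -f "$rpc_addr"
```

The real helper additionally checks that the target process is still alive on each retry, which the file-based stand-in omits.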
00:26:49.627 [2024-07-25 00:06:55.083295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.627 [2024-07-25 00:06:55.083372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:49.627 [2024-07-25 00:06:55.083461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:49.627 [2024-07-25 00:06:55.083463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.627 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:49.627 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:26:49.627 00:06:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:49.627 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:49.627 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.627 00:06:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.627 00:06:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:49.627 00:06:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
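In the entries above, `adq_configure_nvmf_target` first asks the target for its default socket implementation (`rpc_cmd sock_get_default_impl | jq -r .impl_name`, yielding `posix`) and then enables placement IDs and zero-copy sends on it. A jq-free sketch of that extraction; the JSON reply below is an assumed sample of the RPC result shape, not captured from this run:

```shell
#!/usr/bin/env bash
# Extract .impl_name from a sock_get_default_impl-style JSON reply without jq.
# The reply text is an assumed example, not output from this test run.
reply='{ "impl_name": "posix" }'

impl=$(printf '%s' "$reply" \
    | sed -n 's/.*"impl_name"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')

# The harness then feeds this value back, as in:
#   rpc_cmd sock_impl_set_options --enable-placement-id 0 \
#       --enable-zerocopy-send-server -i "$impl"
echo "$impl"
```

For anything beyond a fixed, known reply shape a proper JSON parser such as jq (which the harness itself uses) is the safer choice.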
00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.628 [2024-07-25 00:06:55.312770] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.628 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.886 Malloc1 00:26:49.886 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.886 00:06:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:49.886 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.886 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.886 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.886 
00:06:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:49.886 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.886 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.886 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.886 00:06:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:49.886 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.886 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.886 [2024-07-25 00:06:55.364319] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:49.886 00:06:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.886 00:06:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=863804 00:26:49.886 00:06:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:49.886 00:06:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:49.886 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.790 00:06:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:51.790 00:06:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.790 00:06:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.790 00:06:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.790 00:06:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:51.790 
"tick_rate": 2700000000, 00:26:51.790 "poll_groups": [ 00:26:51.790 { 00:26:51.790 "name": "nvmf_tgt_poll_group_000", 00:26:51.790 "admin_qpairs": 1, 00:26:51.790 "io_qpairs": 1, 00:26:51.790 "current_admin_qpairs": 1, 00:26:51.790 "current_io_qpairs": 1, 00:26:51.790 "pending_bdev_io": 0, 00:26:51.790 "completed_nvme_io": 21027, 00:26:51.790 "transports": [ 00:26:51.790 { 00:26:51.790 "trtype": "TCP" 00:26:51.790 } 00:26:51.790 ] 00:26:51.790 }, 00:26:51.790 { 00:26:51.790 "name": "nvmf_tgt_poll_group_001", 00:26:51.790 "admin_qpairs": 0, 00:26:51.790 "io_qpairs": 1, 00:26:51.790 "current_admin_qpairs": 0, 00:26:51.790 "current_io_qpairs": 1, 00:26:51.790 "pending_bdev_io": 0, 00:26:51.790 "completed_nvme_io": 17505, 00:26:51.791 "transports": [ 00:26:51.791 { 00:26:51.791 "trtype": "TCP" 00:26:51.791 } 00:26:51.791 ] 00:26:51.791 }, 00:26:51.791 { 00:26:51.791 "name": "nvmf_tgt_poll_group_002", 00:26:51.791 "admin_qpairs": 0, 00:26:51.791 "io_qpairs": 1, 00:26:51.791 "current_admin_qpairs": 0, 00:26:51.791 "current_io_qpairs": 1, 00:26:51.791 "pending_bdev_io": 0, 00:26:51.791 "completed_nvme_io": 20476, 00:26:51.791 "transports": [ 00:26:51.791 { 00:26:51.791 "trtype": "TCP" 00:26:51.791 } 00:26:51.791 ] 00:26:51.791 }, 00:26:51.791 { 00:26:51.791 "name": "nvmf_tgt_poll_group_003", 00:26:51.791 "admin_qpairs": 0, 00:26:51.791 "io_qpairs": 1, 00:26:51.791 "current_admin_qpairs": 0, 00:26:51.791 "current_io_qpairs": 1, 00:26:51.791 "pending_bdev_io": 0, 00:26:51.791 "completed_nvme_io": 17893, 00:26:51.791 "transports": [ 00:26:51.791 { 00:26:51.791 "trtype": "TCP" 00:26:51.791 } 00:26:51.791 ] 00:26:51.791 } 00:26:51.791 ] 00:26:51.791 }' 00:26:51.791 00:06:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:51.791 00:06:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:51.791 00:06:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:51.791 00:06:57 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:51.791 00:06:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 863804 00:26:59.911 Initializing NVMe Controllers 00:26:59.911 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:59.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:59.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:59.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:59.912 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:59.912 Initialization complete. Launching workers. 00:26:59.912 ======================================================== 00:26:59.912 Latency(us) 00:26:59.912 Device Information : IOPS MiB/s Average min max 00:26:59.912 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10761.70 42.04 5947.59 2127.55 7779.58 00:26:59.912 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9170.00 35.82 6980.25 2517.28 11183.23 00:26:59.912 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11073.70 43.26 5779.55 3951.09 7410.69 00:26:59.912 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9353.40 36.54 6842.55 2842.10 11164.01 00:26:59.912 ======================================================== 00:26:59.912 Total : 40358.80 157.65 6343.53 2127.55 11183.23 00:26:59.912 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:59.912 00:07:05 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:59.912 rmmod nvme_tcp 00:26:59.912 rmmod nvme_fabrics 00:26:59.912 rmmod nvme_keyring 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 863767 ']' 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 863767 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 863767 ']' 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 863767 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 863767 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 863767' 00:26:59.912 killing process with pid 863767 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 863767 00:26:59.912 00:07:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 863767 00:27:00.171 00:07:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:00.171 00:07:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:00.171 00:07:05 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:00.171 00:07:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:00.171 00:07:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:00.171 00:07:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.171 00:07:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:00.171 00:07:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.712 00:07:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:02.713 00:07:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:02.713 00:07:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:02.971 00:07:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:04.906 00:07:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.186 00:07:15 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.186 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:10.187 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:10.187 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:10.187 Found net devices under 0000:09:00.0: cvl_0_0 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:10.187 Found net devices under 0000:09:00.1: cvl_0_1 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:10.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:10.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:27:10.187 00:27:10.187 --- 10.0.0.2 ping statistics --- 00:27:10.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.187 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:10.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:27:10.187 00:27:10.187 --- 10.0.0.1 ping statistics --- 00:27:10.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.187 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:10.187 net.core.busy_poll = 1 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:10.187 net.core.busy_read = 1 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:10.187 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:10.188 00:07:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:10.188 00:07:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.188 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=866418 00:27:10.188 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:10.188 00:07:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 866418 00:27:10.188 00:07:15 
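The ADQ host-side setup traced above (hardware TC offload, busy polling, and an mqprio qdisc with a flower filter steering NVMe/TCP traffic into a dedicated traffic class) can be condensed into a standalone sketch. The interface name `cvl_0_0`, the listener address `10.0.0.2`, and port `4420` are taken from this run and are placeholders for other setups; the commands mirror `perf_adq.sh` minus the `ip netns exec` wrapper used here.

```shell
#!/bin/bash
# Sketch of the ADQ configuration steps from the trace above.
# IFACE, ADDR, and PORT are assumptions copied from this test run.
IFACE=cvl_0_0
ADDR=10.0.0.2
PORT=4420

# Enable hardware traffic-class offload; disable packet-inspect optimization.
ethtool --offload "$IFACE" hw-tc-offload on
ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off

# Enable busy polling so socket reads poll the NIC queues directly.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 = queues 0-1, TC1 = queues 2-3 (channel mode).
tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev "$IFACE" ingress

# Steer NVMe/TCP traffic for ADDR:PORT into TC1 in hardware (skip_sw).
tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
    dst_ip "$ADDR"/32 ip_proto tcp dst_port "$PORT" skip_sw hw_tc 1
```

Note the `hw 1 mode channel` and `skip_sw` options: both require a NIC/driver with ADQ support (the `ice` driver reloaded earlier in this log), so the filter is applied in hardware rather than falling back to software classification.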
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 866418 ']' 00:27:10.188 00:07:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.188 00:07:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:10.188 00:07:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.188 00:07:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:10.188 00:07:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.188 [2024-07-25 00:07:15.895009] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:10.188 [2024-07-25 00:07:15.895083] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.446 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.446 [2024-07-25 00:07:15.963287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.446 [2024-07-25 00:07:16.056592] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.446 [2024-07-25 00:07:16.056651] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.446 [2024-07-25 00:07:16.056667] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.446 [2024-07-25 00:07:16.056680] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.446 [2024-07-25 00:07:16.056691] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:10.446 [2024-07-25 00:07:16.056778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.446 [2024-07-25 00:07:16.056846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.446 [2024-07-25 00:07:16.056936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:10.446 [2024-07-25 00:07:16.056938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.446 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:10.446 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:10.446 00:07:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:10.446 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:10.446 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.446 00:07:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.446 00:07:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:10.446 00:07:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:10.446 00:07:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:10.446 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.446 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.446 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.704 00:07:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:10.704 00:07:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:10.704 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:10.704 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.704 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.704 00:07:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:10.704 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.705 [2024-07-25 00:07:16.294121] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.705 Malloc1 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.705 
00:07:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.705 [2024-07-25 00:07:16.346246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=866562 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:10.705 00:07:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:10.705 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.233 00:07:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:13.234 00:07:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.234 00:07:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.234 00:07:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.234 00:07:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:13.234 
"tick_rate": 2700000000, 00:27:13.234 "poll_groups": [ 00:27:13.234 { 00:27:13.234 "name": "nvmf_tgt_poll_group_000", 00:27:13.234 "admin_qpairs": 1, 00:27:13.234 "io_qpairs": 1, 00:27:13.234 "current_admin_qpairs": 1, 00:27:13.234 "current_io_qpairs": 1, 00:27:13.234 "pending_bdev_io": 0, 00:27:13.234 "completed_nvme_io": 25140, 00:27:13.234 "transports": [ 00:27:13.234 { 00:27:13.234 "trtype": "TCP" 00:27:13.234 } 00:27:13.234 ] 00:27:13.234 }, 00:27:13.234 { 00:27:13.234 "name": "nvmf_tgt_poll_group_001", 00:27:13.234 "admin_qpairs": 0, 00:27:13.234 "io_qpairs": 3, 00:27:13.234 "current_admin_qpairs": 0, 00:27:13.234 "current_io_qpairs": 3, 00:27:13.234 "pending_bdev_io": 0, 00:27:13.234 "completed_nvme_io": 26508, 00:27:13.234 "transports": [ 00:27:13.234 { 00:27:13.234 "trtype": "TCP" 00:27:13.234 } 00:27:13.234 ] 00:27:13.234 }, 00:27:13.234 { 00:27:13.234 "name": "nvmf_tgt_poll_group_002", 00:27:13.234 "admin_qpairs": 0, 00:27:13.234 "io_qpairs": 0, 00:27:13.234 "current_admin_qpairs": 0, 00:27:13.234 "current_io_qpairs": 0, 00:27:13.234 "pending_bdev_io": 0, 00:27:13.234 "completed_nvme_io": 0, 00:27:13.234 "transports": [ 00:27:13.234 { 00:27:13.234 "trtype": "TCP" 00:27:13.234 } 00:27:13.234 ] 00:27:13.234 }, 00:27:13.234 { 00:27:13.234 "name": "nvmf_tgt_poll_group_003", 00:27:13.234 "admin_qpairs": 0, 00:27:13.234 "io_qpairs": 0, 00:27:13.234 "current_admin_qpairs": 0, 00:27:13.234 "current_io_qpairs": 0, 00:27:13.234 "pending_bdev_io": 0, 00:27:13.234 "completed_nvme_io": 0, 00:27:13.234 "transports": [ 00:27:13.234 { 00:27:13.234 "trtype": "TCP" 00:27:13.234 } 00:27:13.234 ] 00:27:13.234 } 00:27:13.234 ] 00:27:13.234 }' 00:27:13.234 00:07:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:13.234 00:07:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:13.234 00:07:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:13.234 00:07:18 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:13.234 00:07:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 866562 00:27:21.343 Initializing NVMe Controllers 00:27:21.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:21.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:21.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:21.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:21.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:21.343 Initialization complete. Launching workers. 00:27:21.343 ======================================================== 00:27:21.343 Latency(us) 00:27:21.343 Device Information : IOPS MiB/s Average min max 00:27:21.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5409.10 21.13 11833.99 1678.39 61884.42 00:27:21.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13155.00 51.39 4881.14 1748.61 46798.10 00:27:21.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4465.50 17.44 14340.85 1864.24 61030.01 00:27:21.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4128.40 16.13 15504.94 1687.76 62602.95 00:27:21.343 ======================================================== 00:27:21.343 Total : 27158.00 106.09 9436.35 1678.39 62602.95 00:27:21.343 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:21.343 
00:07:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:21.343 rmmod nvme_tcp 00:27:21.343 rmmod nvme_fabrics 00:27:21.343 rmmod nvme_keyring 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 866418 ']' 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 866418 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 866418 ']' 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 866418 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 866418 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 866418' 00:27:21.343 killing process with pid 866418 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 866418 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 866418 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:21.343 00:07:26 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:21.343 00:07:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.248 00:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:23.248 00:07:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:23.248 00:27:23.248 real 0m43.929s 00:27:23.248 user 2m32.916s 00:27:23.248 sys 0m12.301s 00:27:23.248 00:07:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:23.248 00:07:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:23.248 ************************************ 00:27:23.248 END TEST nvmf_perf_adq 00:27:23.248 ************************************ 00:27:23.248 00:07:28 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:23.248 00:07:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:23.248 00:07:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:23.248 00:07:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:23.506 ************************************ 00:27:23.506 START TEST nvmf_shutdown 00:27:23.506 ************************************ 00:27:23.507 00:07:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:23.507 * Looking for test storage... 
00:27:23.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:23.507 00:07:29 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:23.507 00:07:29 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:23.507 ************************************ 00:27:23.507 START TEST nvmf_shutdown_tc1 00:27:23.507 ************************************ 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:23.507 00:07:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:23.507 00:07:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:26.037 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.037 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:26.038 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:26.038 Found net devices under 0000:09:00.0: cvl_0_0 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.038 00:07:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:26.038 Found net devices under 0000:09:00.1: cvl_0_1 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:26.038 
00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:26.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:26.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:27:26.038 00:27:26.038 --- 10.0.0.2 ping statistics --- 00:27:26.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.038 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:26.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:26.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:27:26.038 00:27:26.038 --- 10.0.0.1 ping statistics --- 00:27:26.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.038 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:26.038 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=869715 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 869715 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 869715 ']' 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:26.039 [2024-07-25 00:07:31.335531] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:27:26.039 [2024-07-25 00:07:31.335620] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.039 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.039 [2024-07-25 00:07:31.404230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:26.039 [2024-07-25 00:07:31.497229] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.039 [2024-07-25 00:07:31.497285] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.039 [2024-07-25 00:07:31.497302] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.039 [2024-07-25 00:07:31.497316] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.039 [2024-07-25 00:07:31.497327] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:26.039 [2024-07-25 00:07:31.497432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:26.039 [2024-07-25 00:07:31.497518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:26.039 [2024-07-25 00:07:31.497587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.039 [2024-07-25 00:07:31.497583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:26.039 [2024-07-25 00:07:31.648955] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:26.039 
00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:26.039 00:07:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.039 00:07:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:26.039 Malloc1 00:27:26.039 [2024-07-25 00:07:31.728011] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.298 Malloc2 00:27:26.298 Malloc3 00:27:26.298 Malloc4 00:27:26.298 Malloc5 00:27:26.298 Malloc6 00:27:26.298 Malloc7 00:27:26.556 Malloc8 00:27:26.556 Malloc9 00:27:26.556 Malloc10 00:27:26.556 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.556 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:26.556 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:26.556 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:26.556 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=869893 00:27:26.556 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 869893 
/var/tmp/bdevperf.sock 00:27:26.556 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 869893 ']' 00:27:26.556 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:26.556 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:26.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.557 { 00:27:26.557 "params": { 00:27:26.557 "name": "Nvme$subsystem", 00:27:26.557 "trtype": "$TEST_TRANSPORT", 00:27:26.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.557 "adrfam": "ipv4", 00:27:26.557 "trsvcid": "$NVMF_PORT", 00:27:26.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.557 "hdgst": ${hdgst:-false}, 00:27:26.557 "ddgst": ${ddgst:-false} 00:27:26.557 }, 00:27:26.557 "method": "bdev_nvme_attach_controller" 00:27:26.557 } 00:27:26.557 EOF 00:27:26.557 )") 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.557 { 00:27:26.557 "params": { 00:27:26.557 "name": "Nvme$subsystem", 00:27:26.557 "trtype": "$TEST_TRANSPORT", 00:27:26.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.557 "adrfam": "ipv4", 00:27:26.557 "trsvcid": "$NVMF_PORT", 00:27:26.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.557 "hdgst": ${hdgst:-false}, 00:27:26.557 "ddgst": ${ddgst:-false} 00:27:26.557 }, 00:27:26.557 "method": "bdev_nvme_attach_controller" 00:27:26.557 } 00:27:26.557 EOF 00:27:26.557 
)") 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.557 { 00:27:26.557 "params": { 00:27:26.557 "name": "Nvme$subsystem", 00:27:26.557 "trtype": "$TEST_TRANSPORT", 00:27:26.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.557 "adrfam": "ipv4", 00:27:26.557 "trsvcid": "$NVMF_PORT", 00:27:26.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.557 "hdgst": ${hdgst:-false}, 00:27:26.557 "ddgst": ${ddgst:-false} 00:27:26.557 }, 00:27:26.557 "method": "bdev_nvme_attach_controller" 00:27:26.557 } 00:27:26.557 EOF 00:27:26.557 )") 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.557 { 00:27:26.557 "params": { 00:27:26.557 "name": "Nvme$subsystem", 00:27:26.557 "trtype": "$TEST_TRANSPORT", 00:27:26.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.557 "adrfam": "ipv4", 00:27:26.557 "trsvcid": "$NVMF_PORT", 00:27:26.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.557 "hdgst": ${hdgst:-false}, 00:27:26.557 "ddgst": ${ddgst:-false} 00:27:26.557 }, 00:27:26.557 "method": "bdev_nvme_attach_controller" 00:27:26.557 } 00:27:26.557 EOF 00:27:26.557 )") 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.557 00:07:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.557 { 00:27:26.557 "params": { 00:27:26.557 "name": "Nvme$subsystem", 00:27:26.557 "trtype": "$TEST_TRANSPORT", 00:27:26.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.557 "adrfam": "ipv4", 00:27:26.557 "trsvcid": "$NVMF_PORT", 00:27:26.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.557 "hdgst": ${hdgst:-false}, 00:27:26.557 "ddgst": ${ddgst:-false} 00:27:26.557 }, 00:27:26.557 "method": "bdev_nvme_attach_controller" 00:27:26.557 } 00:27:26.557 EOF 00:27:26.557 )") 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.557 { 00:27:26.557 "params": { 00:27:26.557 "name": "Nvme$subsystem", 00:27:26.557 "trtype": "$TEST_TRANSPORT", 00:27:26.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.557 "adrfam": "ipv4", 00:27:26.557 "trsvcid": "$NVMF_PORT", 00:27:26.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.557 "hdgst": ${hdgst:-false}, 00:27:26.557 "ddgst": ${ddgst:-false} 00:27:26.557 }, 00:27:26.557 "method": "bdev_nvme_attach_controller" 00:27:26.557 } 00:27:26.557 EOF 00:27:26.557 )") 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.557 { 00:27:26.557 "params": { 00:27:26.557 "name": "Nvme$subsystem", 00:27:26.557 "trtype": "$TEST_TRANSPORT", 00:27:26.557 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:26.557 "adrfam": "ipv4", 00:27:26.557 "trsvcid": "$NVMF_PORT", 00:27:26.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.557 "hdgst": ${hdgst:-false}, 00:27:26.557 "ddgst": ${ddgst:-false} 00:27:26.557 }, 00:27:26.557 "method": "bdev_nvme_attach_controller" 00:27:26.557 } 00:27:26.557 EOF 00:27:26.557 )") 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.557 { 00:27:26.557 "params": { 00:27:26.557 "name": "Nvme$subsystem", 00:27:26.557 "trtype": "$TEST_TRANSPORT", 00:27:26.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.557 "adrfam": "ipv4", 00:27:26.557 "trsvcid": "$NVMF_PORT", 00:27:26.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.557 "hdgst": ${hdgst:-false}, 00:27:26.557 "ddgst": ${ddgst:-false} 00:27:26.557 }, 00:27:26.557 "method": "bdev_nvme_attach_controller" 00:27:26.557 } 00:27:26.557 EOF 00:27:26.557 )") 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.557 { 00:27:26.557 "params": { 00:27:26.557 "name": "Nvme$subsystem", 00:27:26.557 "trtype": "$TEST_TRANSPORT", 00:27:26.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.557 "adrfam": "ipv4", 00:27:26.557 "trsvcid": "$NVMF_PORT", 00:27:26.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.557 
"hdgst": ${hdgst:-false}, 00:27:26.557 "ddgst": ${ddgst:-false} 00:27:26.557 }, 00:27:26.557 "method": "bdev_nvme_attach_controller" 00:27:26.557 } 00:27:26.557 EOF 00:27:26.557 )") 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.557 { 00:27:26.557 "params": { 00:27:26.557 "name": "Nvme$subsystem", 00:27:26.557 "trtype": "$TEST_TRANSPORT", 00:27:26.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.557 "adrfam": "ipv4", 00:27:26.557 "trsvcid": "$NVMF_PORT", 00:27:26.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.557 "hdgst": ${hdgst:-false}, 00:27:26.557 "ddgst": ${ddgst:-false} 00:27:26.557 }, 00:27:26.557 "method": "bdev_nvme_attach_controller" 00:27:26.557 } 00:27:26.557 EOF 00:27:26.557 )") 00:27:26.557 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.558 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:26.558 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:26.558 00:07:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:26.558 "params": { 00:27:26.558 "name": "Nvme1", 00:27:26.558 "trtype": "tcp", 00:27:26.558 "traddr": "10.0.0.2", 00:27:26.558 "adrfam": "ipv4", 00:27:26.558 "trsvcid": "4420", 00:27:26.558 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:26.558 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:26.558 "hdgst": false, 00:27:26.558 "ddgst": false 00:27:26.558 }, 00:27:26.558 "method": "bdev_nvme_attach_controller" 00:27:26.558 },{ 00:27:26.558 "params": { 00:27:26.558 "name": "Nvme2", 00:27:26.558 "trtype": "tcp", 00:27:26.558 "traddr": "10.0.0.2", 00:27:26.558 "adrfam": "ipv4", 00:27:26.558 "trsvcid": "4420", 00:27:26.558 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:26.558 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:26.558 "hdgst": false, 00:27:26.558 "ddgst": false 00:27:26.558 }, 00:27:26.558 "method": "bdev_nvme_attach_controller" 00:27:26.558 },{ 00:27:26.558 "params": { 00:27:26.558 "name": "Nvme3", 00:27:26.558 "trtype": "tcp", 00:27:26.558 "traddr": "10.0.0.2", 00:27:26.558 "adrfam": "ipv4", 00:27:26.558 "trsvcid": "4420", 00:27:26.558 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:26.558 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:26.558 "hdgst": false, 00:27:26.558 "ddgst": false 00:27:26.558 }, 00:27:26.558 "method": "bdev_nvme_attach_controller" 00:27:26.558 },{ 00:27:26.558 "params": { 00:27:26.558 "name": "Nvme4", 00:27:26.558 "trtype": "tcp", 00:27:26.558 "traddr": "10.0.0.2", 00:27:26.558 "adrfam": "ipv4", 00:27:26.558 "trsvcid": "4420", 00:27:26.558 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:26.558 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:26.558 "hdgst": false, 00:27:26.558 "ddgst": false 00:27:26.558 }, 00:27:26.558 "method": "bdev_nvme_attach_controller" 00:27:26.558 },{ 00:27:26.558 "params": { 00:27:26.558 "name": "Nvme5", 00:27:26.558 
"trtype": "tcp", 00:27:26.558 "traddr": "10.0.0.2", 00:27:26.558 "adrfam": "ipv4", 00:27:26.558 "trsvcid": "4420", 00:27:26.558 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:26.558 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:26.558 "hdgst": false, 00:27:26.558 "ddgst": false 00:27:26.558 }, 00:27:26.558 "method": "bdev_nvme_attach_controller" 00:27:26.558 },{ 00:27:26.558 "params": { 00:27:26.558 "name": "Nvme6", 00:27:26.558 "trtype": "tcp", 00:27:26.558 "traddr": "10.0.0.2", 00:27:26.558 "adrfam": "ipv4", 00:27:26.558 "trsvcid": "4420", 00:27:26.558 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:26.558 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:26.558 "hdgst": false, 00:27:26.558 "ddgst": false 00:27:26.558 }, 00:27:26.558 "method": "bdev_nvme_attach_controller" 00:27:26.558 },{ 00:27:26.558 "params": { 00:27:26.558 "name": "Nvme7", 00:27:26.558 "trtype": "tcp", 00:27:26.558 "traddr": "10.0.0.2", 00:27:26.558 "adrfam": "ipv4", 00:27:26.558 "trsvcid": "4420", 00:27:26.558 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:26.558 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:26.558 "hdgst": false, 00:27:26.558 "ddgst": false 00:27:26.558 }, 00:27:26.558 "method": "bdev_nvme_attach_controller" 00:27:26.558 },{ 00:27:26.558 "params": { 00:27:26.558 "name": "Nvme8", 00:27:26.558 "trtype": "tcp", 00:27:26.558 "traddr": "10.0.0.2", 00:27:26.558 "adrfam": "ipv4", 00:27:26.558 "trsvcid": "4420", 00:27:26.558 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:26.558 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:26.558 "hdgst": false, 00:27:26.558 "ddgst": false 00:27:26.558 }, 00:27:26.558 "method": "bdev_nvme_attach_controller" 00:27:26.558 },{ 00:27:26.558 "params": { 00:27:26.558 "name": "Nvme9", 00:27:26.558 "trtype": "tcp", 00:27:26.558 "traddr": "10.0.0.2", 00:27:26.558 "adrfam": "ipv4", 00:27:26.558 "trsvcid": "4420", 00:27:26.558 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:26.558 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:26.558 "hdgst": false, 00:27:26.558 "ddgst": 
false 00:27:26.558 }, 00:27:26.558 "method": "bdev_nvme_attach_controller" 00:27:26.558 },{ 00:27:26.558 "params": { 00:27:26.558 "name": "Nvme10", 00:27:26.558 "trtype": "tcp", 00:27:26.558 "traddr": "10.0.0.2", 00:27:26.558 "adrfam": "ipv4", 00:27:26.558 "trsvcid": "4420", 00:27:26.558 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:26.558 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:26.558 "hdgst": false, 00:27:26.558 "ddgst": false 00:27:26.558 }, 00:27:26.558 "method": "bdev_nvme_attach_controller" 00:27:26.558 }' 00:27:26.558 [2024-07-25 00:07:32.220428] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:26.558 [2024-07-25 00:07:32.220523] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:26.558 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.816 [2024-07-25 00:07:32.284046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.816 [2024-07-25 00:07:32.372881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.712 00:07:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:28.712 00:07:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:28.712 00:07:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:28.712 00:07:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.712 00:07:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:28.712 00:07:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.712 00:07:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 869893 
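The `waitforlisten 869893 /var/tmp/bdevperf.sock` call traced above blocks until the spawned process is both alive and serving its RPC UNIX socket, and `kill -0` is later used to probe liveness without sending a signal. A hedged sketch of that polling pattern (the socket path, retry budget, and function name here are illustrative, not copied from autotest_common.sh):

```shell
# Sketch of the waitforlisten / "kill -0" pattern from the trace above: poll
# until the target pid is alive AND its RPC UNIX socket exists, with a bounded
# retry budget. Defaults below are illustrative stand-ins.
waitforlisten_sketch() {
  local pid=$1 rpc_sock=${2:-/var/tmp/bdevperf.sock} max_retries=${3:-100}
  local i=0
  while [ "$i" -lt "$max_retries" ]; do
    # kill -0 delivers no signal; it only checks that the process exists.
    kill -0 "$pid" 2>/dev/null || return 1
    # -S tests for a socket file, i.e. the daemon is ready for "rpc_cmd -s".
    [ -S "$rpc_sock" ] && return 0
    sleep 0.1
    i=$((i + 1))
  done
  return 1   # retries exhausted
}
```

The real helper also prints progress ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...", as seen above) before entering its loop, which is why that message appears interleaved with the config-generation trace.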
00:27:28.712 00:07:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:28.712 00:07:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:29.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 869893 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 869715 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.645 { 00:27:29.645 "params": { 00:27:29.645 "name": "Nvme$subsystem", 00:27:29.645 "trtype": "$TEST_TRANSPORT", 00:27:29.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.645 "adrfam": "ipv4", 00:27:29.645 "trsvcid": "$NVMF_PORT", 00:27:29.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.645 "hdgst": ${hdgst:-false}, 00:27:29.645 "ddgst": ${ddgst:-false} 00:27:29.645 }, 00:27:29.645 "method": "bdev_nvme_attach_controller" 00:27:29.645 } 00:27:29.645 EOF 00:27:29.645 )") 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.645 { 00:27:29.645 "params": { 00:27:29.645 "name": "Nvme$subsystem", 00:27:29.645 "trtype": "$TEST_TRANSPORT", 00:27:29.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.645 "adrfam": "ipv4", 00:27:29.645 "trsvcid": "$NVMF_PORT", 00:27:29.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.645 "hdgst": ${hdgst:-false}, 00:27:29.645 "ddgst": ${ddgst:-false} 00:27:29.645 }, 00:27:29.645 "method": "bdev_nvme_attach_controller" 00:27:29.645 } 00:27:29.645 EOF 00:27:29.645 )") 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.645 { 00:27:29.645 "params": { 00:27:29.645 "name": "Nvme$subsystem", 00:27:29.645 "trtype": "$TEST_TRANSPORT", 00:27:29.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.645 "adrfam": "ipv4", 00:27:29.645 "trsvcid": "$NVMF_PORT", 00:27:29.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.645 "hdgst": ${hdgst:-false}, 00:27:29.645 "ddgst": ${ddgst:-false} 00:27:29.645 }, 00:27:29.645 "method": "bdev_nvme_attach_controller" 00:27:29.645 } 00:27:29.645 EOF 00:27:29.645 )") 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:27:29.645 { 00:27:29.645 "params": { 00:27:29.645 "name": "Nvme$subsystem", 00:27:29.645 "trtype": "$TEST_TRANSPORT", 00:27:29.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.645 "adrfam": "ipv4", 00:27:29.645 "trsvcid": "$NVMF_PORT", 00:27:29.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.645 "hdgst": ${hdgst:-false}, 00:27:29.645 "ddgst": ${ddgst:-false} 00:27:29.645 }, 00:27:29.645 "method": "bdev_nvme_attach_controller" 00:27:29.645 } 00:27:29.645 EOF 00:27:29.645 )") 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.645 { 00:27:29.645 "params": { 00:27:29.645 "name": "Nvme$subsystem", 00:27:29.645 "trtype": "$TEST_TRANSPORT", 00:27:29.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.645 "adrfam": "ipv4", 00:27:29.645 "trsvcid": "$NVMF_PORT", 00:27:29.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.645 "hdgst": ${hdgst:-false}, 00:27:29.645 "ddgst": ${ddgst:-false} 00:27:29.645 }, 00:27:29.645 "method": "bdev_nvme_attach_controller" 00:27:29.645 } 00:27:29.645 EOF 00:27:29.645 )") 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.645 { 00:27:29.645 "params": { 00:27:29.645 "name": "Nvme$subsystem", 00:27:29.645 "trtype": "$TEST_TRANSPORT", 00:27:29.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.645 "adrfam": "ipv4", 00:27:29.645 
"trsvcid": "$NVMF_PORT", 00:27:29.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.645 "hdgst": ${hdgst:-false}, 00:27:29.645 "ddgst": ${ddgst:-false} 00:27:29.645 }, 00:27:29.645 "method": "bdev_nvme_attach_controller" 00:27:29.645 } 00:27:29.645 EOF 00:27:29.645 )") 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:29.645 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.646 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.646 { 00:27:29.646 "params": { 00:27:29.646 "name": "Nvme$subsystem", 00:27:29.646 "trtype": "$TEST_TRANSPORT", 00:27:29.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.646 "adrfam": "ipv4", 00:27:29.646 "trsvcid": "$NVMF_PORT", 00:27:29.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.646 "hdgst": ${hdgst:-false}, 00:27:29.646 "ddgst": ${ddgst:-false} 00:27:29.646 }, 00:27:29.646 "method": "bdev_nvme_attach_controller" 00:27:29.646 } 00:27:29.646 EOF 00:27:29.646 )") 00:27:29.646 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:29.646 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.646 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.646 { 00:27:29.646 "params": { 00:27:29.646 "name": "Nvme$subsystem", 00:27:29.646 "trtype": "$TEST_TRANSPORT", 00:27:29.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.646 "adrfam": "ipv4", 00:27:29.646 "trsvcid": "$NVMF_PORT", 00:27:29.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.646 "hdgst": ${hdgst:-false}, 00:27:29.646 "ddgst": ${ddgst:-false} 
00:27:29.646 }, 00:27:29.646 "method": "bdev_nvme_attach_controller" 00:27:29.646 } 00:27:29.646 EOF 00:27:29.646 )") 00:27:29.646 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:29.646 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.646 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.646 { 00:27:29.646 "params": { 00:27:29.646 "name": "Nvme$subsystem", 00:27:29.646 "trtype": "$TEST_TRANSPORT", 00:27:29.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.646 "adrfam": "ipv4", 00:27:29.646 "trsvcid": "$NVMF_PORT", 00:27:29.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.646 "hdgst": ${hdgst:-false}, 00:27:29.646 "ddgst": ${ddgst:-false} 00:27:29.646 }, 00:27:29.646 "method": "bdev_nvme_attach_controller" 00:27:29.646 } 00:27:29.646 EOF 00:27:29.646 )") 00:27:29.646 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:29.646 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.646 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.646 { 00:27:29.646 "params": { 00:27:29.646 "name": "Nvme$subsystem", 00:27:29.646 "trtype": "$TEST_TRANSPORT", 00:27:29.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.646 "adrfam": "ipv4", 00:27:29.646 "trsvcid": "$NVMF_PORT", 00:27:29.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.646 "hdgst": ${hdgst:-false}, 00:27:29.646 "ddgst": ${ddgst:-false} 00:27:29.646 }, 00:27:29.646 "method": "bdev_nvme_attach_controller" 00:27:29.646 } 00:27:29.646 EOF 00:27:29.646 )") 00:27:29.646 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:29.646 00:07:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:27:29.646 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:29.646 00:07:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:29.646 "params": { 00:27:29.646 "name": "Nvme1", 00:27:29.646 "trtype": "tcp", 00:27:29.646 "traddr": "10.0.0.2", 00:27:29.646 "adrfam": "ipv4", 00:27:29.646 "trsvcid": "4420", 00:27:29.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:29.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:29.646 "hdgst": false, 00:27:29.646 "ddgst": false 00:27:29.646 }, 00:27:29.646 "method": "bdev_nvme_attach_controller" 00:27:29.646 },{ 00:27:29.646 "params": { 00:27:29.646 "name": "Nvme2", 00:27:29.646 "trtype": "tcp", 00:27:29.646 "traddr": "10.0.0.2", 00:27:29.646 "adrfam": "ipv4", 00:27:29.646 "trsvcid": "4420", 00:27:29.646 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:29.646 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:29.646 "hdgst": false, 00:27:29.646 "ddgst": false 00:27:29.646 }, 00:27:29.646 "method": "bdev_nvme_attach_controller" 00:27:29.646 },{ 00:27:29.646 "params": { 00:27:29.646 "name": "Nvme3", 00:27:29.646 "trtype": "tcp", 00:27:29.646 "traddr": "10.0.0.2", 00:27:29.646 "adrfam": "ipv4", 00:27:29.646 "trsvcid": "4420", 00:27:29.646 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:29.646 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:29.646 "hdgst": false, 00:27:29.646 "ddgst": false 00:27:29.646 }, 00:27:29.646 "method": "bdev_nvme_attach_controller" 00:27:29.646 },{ 00:27:29.646 "params": { 00:27:29.646 "name": "Nvme4", 00:27:29.646 "trtype": "tcp", 00:27:29.646 "traddr": "10.0.0.2", 00:27:29.646 "adrfam": "ipv4", 00:27:29.646 "trsvcid": "4420", 00:27:29.646 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:29.646 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:29.646 "hdgst": false, 00:27:29.646 "ddgst": false 00:27:29.646 }, 00:27:29.646 "method": "bdev_nvme_attach_controller" 00:27:29.646 
},{ 00:27:29.646 "params": { 00:27:29.646 "name": "Nvme5", 00:27:29.646 "trtype": "tcp", 00:27:29.646 "traddr": "10.0.0.2", 00:27:29.646 "adrfam": "ipv4", 00:27:29.646 "trsvcid": "4420", 00:27:29.646 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:29.646 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:29.646 "hdgst": false, 00:27:29.646 "ddgst": false 00:27:29.646 }, 00:27:29.646 "method": "bdev_nvme_attach_controller" 00:27:29.646 },{ 00:27:29.646 "params": { 00:27:29.646 "name": "Nvme6", 00:27:29.646 "trtype": "tcp", 00:27:29.646 "traddr": "10.0.0.2", 00:27:29.646 "adrfam": "ipv4", 00:27:29.646 "trsvcid": "4420", 00:27:29.646 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:29.646 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:29.646 "hdgst": false, 00:27:29.646 "ddgst": false 00:27:29.646 }, 00:27:29.646 "method": "bdev_nvme_attach_controller" 00:27:29.646 },{ 00:27:29.646 "params": { 00:27:29.646 "name": "Nvme7", 00:27:29.646 "trtype": "tcp", 00:27:29.646 "traddr": "10.0.0.2", 00:27:29.646 "adrfam": "ipv4", 00:27:29.646 "trsvcid": "4420", 00:27:29.646 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:29.646 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:29.646 "hdgst": false, 00:27:29.646 "ddgst": false 00:27:29.646 }, 00:27:29.646 "method": "bdev_nvme_attach_controller" 00:27:29.646 },{ 00:27:29.646 "params": { 00:27:29.646 "name": "Nvme8", 00:27:29.646 "trtype": "tcp", 00:27:29.646 "traddr": "10.0.0.2", 00:27:29.646 "adrfam": "ipv4", 00:27:29.646 "trsvcid": "4420", 00:27:29.646 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:29.646 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:29.646 "hdgst": false, 00:27:29.646 "ddgst": false 00:27:29.646 }, 00:27:29.646 "method": "bdev_nvme_attach_controller" 00:27:29.646 },{ 00:27:29.646 "params": { 00:27:29.646 "name": "Nvme9", 00:27:29.646 "trtype": "tcp", 00:27:29.646 "traddr": "10.0.0.2", 00:27:29.646 "adrfam": "ipv4", 00:27:29.646 "trsvcid": "4420", 00:27:29.646 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:29.646 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:27:29.646 "hdgst": false, 00:27:29.646 "ddgst": false 00:27:29.646 }, 00:27:29.646 "method": "bdev_nvme_attach_controller" 00:27:29.646 },{ 00:27:29.646 "params": { 00:27:29.646 "name": "Nvme10", 00:27:29.646 "trtype": "tcp", 00:27:29.646 "traddr": "10.0.0.2", 00:27:29.646 "adrfam": "ipv4", 00:27:29.646 "trsvcid": "4420", 00:27:29.646 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:29.646 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:29.646 "hdgst": false, 00:27:29.646 "ddgst": false 00:27:29.646 }, 00:27:29.646 "method": "bdev_nvme_attach_controller" 00:27:29.646 }' 00:27:29.646 [2024-07-25 00:07:35.241722] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:29.646 [2024-07-25 00:07:35.241795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870197 ] 00:27:29.646 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.646 [2024-07-25 00:07:35.307931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.904 [2024-07-25 00:07:35.399433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.800 Running I/O for 1 seconds... 
00:27:32.735
00:27:32.735                                                  Latency(us)
00:27:32.735 Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:27:32.735 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:32.735 Verification LBA range: start 0x0 length 0x400
00:27:32.735 Nvme1n1 : 1.14 224.91 14.06 0.00 0.00 281127.82 19515.16 253211.69
00:27:32.735 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:32.735 Verification LBA range: start 0x0 length 0x400
00:27:32.735 Nvme2n1 : 1.15 222.32 13.89 0.00 0.00 278450.82 20000.62 260978.92
00:27:32.735 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:32.735 Verification LBA range: start 0x0 length 0x400
00:27:32.735 Nvme3n1 : 1.04 245.16 15.32 0.00 0.00 249093.88 15922.82 254765.13
00:27:32.735 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:32.735 Verification LBA range: start 0x0 length 0x400
00:27:32.735 Nvme4n1 : 1.14 223.61 13.98 0.00 0.00 269653.71 22427.88 279620.27
00:27:32.735 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:32.735 Verification LBA range: start 0x0 length 0x400
00:27:32.735 Nvme5n1 : 1.18 217.76 13.61 0.00 0.00 272787.15 25437.68 248551.35
00:27:32.735 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:32.735 Verification LBA range: start 0x0 length 0x400
00:27:32.735 Nvme6n1 : 1.17 219.40 13.71 0.00 0.00 265748.86 26020.22 256318.58
00:27:32.736 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:32.736 Verification LBA range: start 0x0 length 0x400
00:27:32.736 Nvme7n1 : 1.18 271.50 16.97 0.00 0.00 211497.42 18058.81 254765.13
00:27:32.736 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:32.736 Verification LBA range: start 0x0 length 0x400
00:27:32.736 Nvme8n1 : 1.18 270.38 16.90 0.00 0.00 209004.47 19418.07 250104.79
00:27:32.736 Job: Nvme9n1 (Core Mask 0x1, workload: verify,
depth: 64, IO size: 65536) 00:27:32.736 Verification LBA range: start 0x0 length 0x400 00:27:32.736 Nvme9n1 : 1.17 223.90 13.99 0.00 0.00 245113.10 9951.76 274959.93 00:27:32.736 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:32.736 Verification LBA range: start 0x0 length 0x400 00:27:32.736 Nvme10n1 : 1.17 222.86 13.93 0.00 0.00 244563.02 21942.42 282727.16 00:27:32.736 =================================================================================================================== 00:27:32.736 Total : 2341.80 146.36 0.00 0.00 250660.79 9951.76 282727.16 00:27:32.994 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:32.994 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:32.994 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:32.994 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:32.994 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:32.994 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:32.994 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:32.994 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:32.994 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:32.994 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:32.994 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:32.994 rmmod nvme_tcp 00:27:32.994 rmmod nvme_fabrics 00:27:32.994 rmmod 
nvme_keyring 00:27:32.994 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:32.994 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:32.994 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:32.995 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 869715 ']' 00:27:32.995 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 869715 00:27:32.995 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 869715 ']' 00:27:32.995 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 869715 00:27:32.995 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:27:32.995 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:32.995 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 869715 00:27:32.995 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:32.995 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:32.995 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 869715' 00:27:32.995 killing process with pid 869715 00:27:32.995 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 869715 00:27:32.995 00:07:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 869715 00:27:33.562 00:07:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:33.562 00:07:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- 
# [[ tcp == \t\c\p ]] 00:27:33.562 00:07:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:33.562 00:07:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:33.562 00:07:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:33.562 00:07:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.562 00:07:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.562 00:07:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.499 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:35.499 00:27:35.499 real 0m12.064s 00:27:35.499 user 0m34.540s 00:27:35.499 sys 0m3.463s 00:27:35.499 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:35.499 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:35.499 ************************************ 00:27:35.499 END TEST nvmf_shutdown_tc1 00:27:35.499 ************************************ 00:27:35.499 00:07:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:35.499 00:07:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:35.499 00:07:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:35.500 ************************************ 00:27:35.500 START TEST nvmf_shutdown_tc2 00:27:35.500 ************************************ 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 
00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:35.500 
00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:35.500 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.500 00:07:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:35.500 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:35.500 Found net devices under 0000:09:00.0: cvl_0_0 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:35.500 Found net devices under 0000:09:00.1: cvl_0_1 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 
-- # [[ yes == yes ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:35.500 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:35.759 00:07:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:35.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:35.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:27:35.759 00:27:35.759 --- 10.0.0.2 ping statistics --- 00:27:35.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.759 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:35.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:35.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:27:35.759 00:27:35.759 --- 10.0.0.1 ping statistics --- 00:27:35.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.759 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=871082 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 871082 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 871082 ']' 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:35.759 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.759 [2024-07-25 00:07:41.416911] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:35.759 [2024-07-25 00:07:41.416979] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.759 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.018 [2024-07-25 00:07:41.480487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:36.018 [2024-07-25 00:07:41.570522] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.018 [2024-07-25 00:07:41.570569] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:36.018 [2024-07-25 00:07:41.570597] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.018 [2024-07-25 00:07:41.570609] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.018 [2024-07-25 00:07:41.570620] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:36.018 [2024-07-25 00:07:41.570749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.018 [2024-07-25 00:07:41.570813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:36.018 [2024-07-25 00:07:41.571005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:36.018 [2024-07-25 00:07:41.571008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.018 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:36.018 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:36.018 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:36.018 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:36.018 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:36.277 [2024-07-25 00:07:41.739029] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.277 00:07:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:36.277 Malloc1 00:27:36.277 [2024-07-25 00:07:41.829236] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:36.277 Malloc2 00:27:36.277 Malloc3 00:27:36.277 Malloc4 00:27:36.535 Malloc5 00:27:36.535 Malloc6 00:27:36.535 Malloc7 00:27:36.535 Malloc8 00:27:36.535 Malloc9 00:27:36.794 Malloc10 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=871259 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 871259 /var/tmp/bdevperf.sock 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 871259 ']' 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:36.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.794 { 00:27:36.794 "params": { 00:27:36.794 "name": "Nvme$subsystem", 00:27:36.794 "trtype": "$TEST_TRANSPORT", 00:27:36.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.794 "adrfam": "ipv4", 00:27:36.794 "trsvcid": "$NVMF_PORT", 00:27:36.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.794 "hdgst": ${hdgst:-false}, 00:27:36.794 "ddgst": ${ddgst:-false} 00:27:36.794 }, 00:27:36.794 "method": "bdev_nvme_attach_controller" 00:27:36.794 } 00:27:36.794 EOF 00:27:36.794 )") 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.794 { 00:27:36.794 "params": { 00:27:36.794 "name": "Nvme$subsystem", 00:27:36.794 "trtype": "$TEST_TRANSPORT", 00:27:36.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.794 "adrfam": "ipv4", 00:27:36.794 "trsvcid": "$NVMF_PORT", 00:27:36.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.794 "hdgst": ${hdgst:-false}, 00:27:36.794 "ddgst": ${ddgst:-false} 00:27:36.794 }, 00:27:36.794 "method": "bdev_nvme_attach_controller" 00:27:36.794 } 00:27:36.794 EOF 00:27:36.794 
)") 00:27:36.794 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat [identical @534/@554 heredoc trace repeated for subsystems 3 through 10] 00:27:36.795 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
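The trace above shows nvmf/common.sh building one JSON fragment per subsystem with a heredoc, appending each to a bash array, and then normalizing the joined result with jq. A minimal sketch of that pattern follows; the two-subsystem loop and the variable values are illustrative placeholders (not the CI values), and it assumes jq is on PATH:

```shell
# Sketch of the config-assembly pattern from the trace: one JSON fragment
# per subsystem built with a heredoc, collected in an array, then joined
# with commas and parsed by jq. Values below are illustrative only.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas (IFS=,) and wrap in a JSON array,
# mirroring the IFS=, / printf '%s\n' step at common.sh@557-558.
joined=$(IFS=,; printf '%s' "${config[*]}")
echo "[$joined]" | jq -r '.[0].params.name'  # → Nvme1
```

The comma-join plus jq pass is what turns the templated fragments into the concrete Nvme1..Nvme10 attach-controller config printed a few lines below.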
00:27:36.795 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:36.795 00:07:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:36.795 "params": { 00:27:36.795 "name": "Nvme1", 00:27:36.795 "trtype": "tcp", 00:27:36.795 "traddr": "10.0.0.2", 00:27:36.795 "adrfam": "ipv4", 00:27:36.795 "trsvcid": "4420", 00:27:36.795 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:36.795 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:36.795 "hdgst": false, 00:27:36.795 "ddgst": false 00:27:36.795 }, 00:27:36.795 "method": "bdev_nvme_attach_controller" 00:27:36.795 },{ 00:27:36.795 "params": { 00:27:36.795 "name": "Nvme2", 00:27:36.795 "trtype": "tcp", 00:27:36.795 "traddr": "10.0.0.2", 00:27:36.795 "adrfam": "ipv4", 00:27:36.795 "trsvcid": "4420", 00:27:36.795 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:36.795 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:36.795 "hdgst": false, 00:27:36.795 "ddgst": false 00:27:36.795 }, 00:27:36.795 "method": "bdev_nvme_attach_controller" 00:27:36.795 },{ 00:27:36.795 "params": { 00:27:36.795 "name": "Nvme3", 00:27:36.795 "trtype": "tcp", 00:27:36.795 "traddr": "10.0.0.2", 00:27:36.795 "adrfam": "ipv4", 00:27:36.795 "trsvcid": "4420", 00:27:36.795 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:36.795 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:36.795 "hdgst": false, 00:27:36.795 "ddgst": false 00:27:36.795 }, 00:27:36.795 "method": "bdev_nvme_attach_controller" 00:27:36.795 },{ 00:27:36.795 "params": { 00:27:36.795 "name": "Nvme4", 00:27:36.795 "trtype": "tcp", 00:27:36.795 "traddr": "10.0.0.2", 00:27:36.795 "adrfam": "ipv4", 00:27:36.795 "trsvcid": "4420", 00:27:36.795 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:36.795 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:36.795 "hdgst": false, 00:27:36.795 "ddgst": false 00:27:36.795 }, 00:27:36.795 "method": "bdev_nvme_attach_controller" 00:27:36.795 },{ 00:27:36.795 "params": { 00:27:36.795 "name": "Nvme5", 00:27:36.795 
"trtype": "tcp", 00:27:36.795 "traddr": "10.0.0.2", 00:27:36.795 "adrfam": "ipv4", 00:27:36.795 "trsvcid": "4420", 00:27:36.795 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:36.795 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:36.795 "hdgst": false, 00:27:36.795 "ddgst": false 00:27:36.795 }, 00:27:36.795 "method": "bdev_nvme_attach_controller" 00:27:36.795 },{ 00:27:36.795 "params": { 00:27:36.795 "name": "Nvme6", 00:27:36.795 "trtype": "tcp", 00:27:36.795 "traddr": "10.0.0.2", 00:27:36.795 "adrfam": "ipv4", 00:27:36.795 "trsvcid": "4420", 00:27:36.795 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:36.795 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:36.795 "hdgst": false, 00:27:36.795 "ddgst": false 00:27:36.795 }, 00:27:36.795 "method": "bdev_nvme_attach_controller" 00:27:36.795 },{ 00:27:36.795 "params": { 00:27:36.795 "name": "Nvme7", 00:27:36.795 "trtype": "tcp", 00:27:36.795 "traddr": "10.0.0.2", 00:27:36.795 "adrfam": "ipv4", 00:27:36.795 "trsvcid": "4420", 00:27:36.795 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:36.795 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:36.795 "hdgst": false, 00:27:36.795 "ddgst": false 00:27:36.795 }, 00:27:36.795 "method": "bdev_nvme_attach_controller" 00:27:36.795 },{ 00:27:36.795 "params": { 00:27:36.795 "name": "Nvme8", 00:27:36.795 "trtype": "tcp", 00:27:36.795 "traddr": "10.0.0.2", 00:27:36.795 "adrfam": "ipv4", 00:27:36.795 "trsvcid": "4420", 00:27:36.795 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:36.795 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:36.795 "hdgst": false, 00:27:36.795 "ddgst": false 00:27:36.795 }, 00:27:36.795 "method": "bdev_nvme_attach_controller" 00:27:36.795 },{ 00:27:36.795 "params": { 00:27:36.795 "name": "Nvme9", 00:27:36.795 "trtype": "tcp", 00:27:36.795 "traddr": "10.0.0.2", 00:27:36.795 "adrfam": "ipv4", 00:27:36.795 "trsvcid": "4420", 00:27:36.795 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:36.795 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:36.795 "hdgst": false, 00:27:36.795 "ddgst": 
false 00:27:36.795 }, 00:27:36.795 "method": "bdev_nvme_attach_controller" 00:27:36.795 },{ 00:27:36.795 "params": { 00:27:36.795 "name": "Nvme10", 00:27:36.795 "trtype": "tcp", 00:27:36.795 "traddr": "10.0.0.2", 00:27:36.795 "adrfam": "ipv4", 00:27:36.795 "trsvcid": "4420", 00:27:36.795 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:36.795 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:36.795 "hdgst": false, 00:27:36.795 "ddgst": false 00:27:36.795 }, 00:27:36.795 "method": "bdev_nvme_attach_controller" 00:27:36.795 }' 00:27:36.795 [2024-07-25 00:07:42.352855] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:36.795 [2024-07-25 00:07:42.352930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871259 ] 00:27:36.795 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.795 [2024-07-25 00:07:42.414680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.795 [2024-07-25 00:07:42.502725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.695 Running I/O for 10 seconds... 
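The lines that follow show target/shutdown.sh's waitforio helper polling bdev_get_iostat over the bdevperf RPC socket until Nvme1n1 has completed at least 100 reads (the counts climb 3 → 67 → 131 across polls). A minimal sketch of that retry loop is below; rpc_cmd is a hypothetical stub here (in the harness it wraps SPDK's rpc.py against /var/tmp/bdevperf.sock), with canned counts chosen to mirror the log:

```shell
# Sketch of the waitforio loop (target/shutdown.sh@57-69): poll the bdev's
# num_read_ops up to 10 times, 0.25s apart, until it reaches 100.
rpc_cmd() {
  # Hypothetical stub of the harness's RPC wrapper: report read counts
  # that grow with elapsed polls (3, 67, 131, ... as in the log).
  echo "{\"bdevs\":[{\"num_read_ops\":$(( (11 - i) * 64 - 61 ))}]}"
}

waitforio() {
  local ret=1 i
  for (( i = 10; i != 0; i-- )); do
    read_io_count=$(rpc_cmd bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

waitforio && echo "io threshold reached: $read_io_count"
```

Once the threshold is met the test considers the workload warmed up and proceeds to kill the bdevperf process mid-I/O, which is the actual shutdown condition under test.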
00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:38.695 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:38.953 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:38.953 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:38.953 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:38.953 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:38.953 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.953 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:38.953 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.953 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:38.953 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:38.953 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:39.210 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:39.210 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:39.210 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:39.210 00:07:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:39.210 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.210 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.210 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.468 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:39.468 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:39.468 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:39.468 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:39.468 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:39.468 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 871259 00:27:39.468 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 871259 ']' 00:27:39.468 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 871259 00:27:39.468 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:39.468 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:39.468 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 871259 00:27:39.468 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:39.468 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:39.468 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 871259' 00:27:39.468 killing process with pid 871259 00:27:39.468 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 871259 00:27:39.468 00:07:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 871259 00:27:39.468 Received shutdown signal, test time was about 0.990732 seconds 00:27:39.468 00:27:39.468 Latency(us) 00:27:39.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.468 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.468 Verification LBA range: start 0x0 length 0x400 00:27:39.468 Nvme1n1 : 0.96 200.61 12.54 0.00 0.00 315449.14 25437.68 267192.70 00:27:39.468 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.468 Verification LBA range: start 0x0 length 0x400 00:27:39.468 Nvme2n1 : 0.98 267.19 16.70 0.00 0.00 231235.17 6140.97 257872.02 00:27:39.468 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.468 Verification LBA range: start 0x0 length 0x400 00:27:39.468 Nvme3n1 : 0.99 258.62 16.16 0.00 0.00 235418.55 21554.06 254765.13 00:27:39.468 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.468 Verification LBA range: start 0x0 length 0x400 00:27:39.468 Nvme4n1 : 0.97 264.94 16.56 0.00 0.00 224690.63 17767.54 257872.02 00:27:39.468 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.468 Verification LBA range: start 0x0 length 0x400 00:27:39.468 Nvme5n1 : 0.97 197.66 12.35 0.00 0.00 295471.79 23398.78 290494.39 00:27:39.468 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.468 Verification LBA range: start 0x0 length 0x400 00:27:39.468 Nvme6n1 : 0.99 259.65 16.23 0.00 0.00 220679.96 19515.16 270299.59 00:27:39.468 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:27:39.468 Verification LBA range: start 0x0 length 0x400 00:27:39.468 Nvme7n1 : 0.96 200.30 12.52 0.00 0.00 278004.37 21262.79 256318.58 00:27:39.468 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.468 Verification LBA range: start 0x0 length 0x400 00:27:39.469 Nvme8n1 : 0.94 212.14 13.26 0.00 0.00 254026.70 4878.79 257872.02 00:27:39.469 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.469 Verification LBA range: start 0x0 length 0x400 00:27:39.469 Nvme9n1 : 0.98 196.17 12.26 0.00 0.00 274290.16 23107.51 313796.08 00:27:39.469 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.469 Verification LBA range: start 0x0 length 0x400 00:27:39.469 Nvme10n1 : 0.95 201.14 12.57 0.00 0.00 258750.39 22622.06 237677.23 00:27:39.469 =================================================================================================================== 00:27:39.469 Total : 2258.43 141.15 0.00 0.00 255108.87 4878.79 313796.08 00:27:39.726 00:07:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:40.657 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 871082 00:27:40.657 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:40.657 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:40.657 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:40.657 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:40.657 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:40.657 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:27:40.657 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:40.657 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:40.657 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:40.657 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:40.657 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:40.657 rmmod nvme_tcp 00:27:40.657 rmmod nvme_fabrics 00:27:40.657 rmmod nvme_keyring 00:27:40.915 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:40.915 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:40.915 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:40.915 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 871082 ']' 00:27:40.915 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 871082 00:27:40.915 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 871082 ']' 00:27:40.915 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 871082 00:27:40.915 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:40.915 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:40.915 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 871082 00:27:40.915 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:40.915 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 
-- # '[' reactor_1 = sudo ']' 00:27:40.915 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 871082' 00:27:40.915 killing process with pid 871082 00:27:40.915 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 871082 00:27:40.915 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 871082 00:27:41.482 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:41.482 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:41.482 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:41.482 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:41.482 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:41.482 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.482 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.482 00:07:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.387 00:07:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:43.387 00:27:43.387 real 0m7.795s 00:27:43.387 user 0m23.360s 00:27:43.387 sys 0m1.620s 00:27:43.387 00:07:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:43.387 00:07:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.387 ************************************ 00:27:43.387 END TEST nvmf_shutdown_tc2 00:27:43.387 ************************************ 00:27:43.387 00:07:48 
nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:43.387 00:07:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:43.387 00:07:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:43.387 00:07:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:43.387 ************************************ 00:27:43.387 START TEST nvmf_shutdown_tc3 00:27:43.387 ************************************ 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:43.387 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.388 00:07:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:43.388 00:07:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:43.388 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:43.388 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:43.388 Found net devices under 0000:09:00.0: cvl_0_0 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:43.388 Found net devices under 0000:09:00.1: cvl_0_1 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:43.388 
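The "Found net devices under ..." messages above come from a sysfs walk in nvmf/common.sh (lines 382-401 per the xtrace). A minimal re-runnable sketch of that walk, under the assumption that only read access to /sys is needed (on a machine without network PCI functions it simply prints nothing):

```shell
# For each PCI function, look for a net/ subdirectory; its entries are the
# kernel interface names bound to that function (e.g. cvl_0_0 in this log).
for pci in /sys/bus/pci/devices/*; do
  [ -d "$pci/net" ] || continue              # skip non-NIC functions
  pci_net_devs=("$pci/net/"*)                # e.g. .../0000:09:00.0/net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only the interface names
  echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
done
```

The `"${pci_net_devs[@]##*/}"` expansion is the same path-stripping trick the log shows at nvmf/common.sh line 399.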
00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:43.388 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:43.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:43.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:27:43.647 00:27:43.647 --- 10.0.0.2 ping statistics --- 00:27:43.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.647 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:43.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:43.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:27:43.647 00:27:43.647 --- 10.0.0.1 ping statistics --- 00:27:43.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.647 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=872173 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec 
cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 872173 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 872173 ']' 00:27:43.647 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.648 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:43.648 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.648 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:43.648 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:43.648 [2024-07-25 00:07:49.268985] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:43.648 [2024-07-25 00:07:49.269062] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.648 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.648 [2024-07-25 00:07:49.331035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:43.906 [2024-07-25 00:07:49.420347] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.906 [2024-07-25 00:07:49.420396] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
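The nvmf_tcp_init sequence traced above (lines 229-268 of nvmf/common.sh) moves the target NIC into a network namespace, addresses both ends, and verifies reachability before the target app is launched inside that namespace. A dry-run sketch of that wiring, with the commands and names taken from the log; the `run()` echo wrapper is an addition so the sketch executes without root (swap the echo for `"$@"` to apply it for real):

```shell
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0      # moved into the namespace, will own 10.0.0.2
INITIATOR_IF=cvl_0_1   # stays in the root namespace with 10.0.0.1

run() { echo "+ $*"; }  # dry-run wrapper; real setup needs root

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NVMF_TARGET_NAMESPACE"
run ip link set "$TARGET_IF" netns "$NVMF_TARGET_NAMESPACE"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$TARGET_IF" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2        # initiator -> target reachability check
```

This is why nvmf_tgt is prefixed with `ip netns exec cvl_0_0_ns_spdk` in the log: NVMF_TARGET_NS_CMD wraps every target-side command so it sees only the namespaced interface.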
00:27:43.906 [2024-07-25 00:07:49.420425] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.906 [2024-07-25 00:07:49.420437] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.906 [2024-07-25 00:07:49.420447] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:43.906 [2024-07-25 00:07:49.420528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:43.906 [2024-07-25 00:07:49.420592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:43.906 [2024-07-25 00:07:49.420642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:43.906 [2024-07-25 00:07:49.420645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:43.906 [2024-07-25 00:07:49.572855] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:43.906 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:27:43.907 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:43.907 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.907 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:43.907 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.907 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:43.907 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.907 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:43.907 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.907 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:43.907 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:43.907 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.907 00:07:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.165 Malloc1 00:27:44.165 [2024-07-25 00:07:49.652649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.165 Malloc2 00:27:44.165 Malloc3 00:27:44.165 Malloc4 00:27:44.165 Malloc5 00:27:44.165 Malloc6 00:27:44.423 Malloc7 00:27:44.423 Malloc8 00:27:44.423 Malloc9 00:27:44.423 Malloc10 00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=872250 00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 872250 /var/tmp/bdevperf.sock 00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 872250 ']' 00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:44.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.423 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.423 { 00:27:44.423 "params": { 00:27:44.423 "name": "Nvme$subsystem", 00:27:44.423 "trtype": "$TEST_TRANSPORT", 00:27:44.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.423 "adrfam": "ipv4", 00:27:44.424 "trsvcid": "$NVMF_PORT", 00:27:44.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.424 "hdgst": ${hdgst:-false}, 00:27:44.424 "ddgst": ${ddgst:-false} 00:27:44.424 }, 00:27:44.424 "method": "bdev_nvme_attach_controller" 00:27:44.424 } 00:27:44.424 EOF 00:27:44.424 )") 00:27:44.424 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:44.424 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.424 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.424 { 00:27:44.424 "params": { 00:27:44.424 "name": "Nvme$subsystem", 00:27:44.424 "trtype": "$TEST_TRANSPORT", 00:27:44.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.424 "adrfam": "ipv4", 00:27:44.424 "trsvcid": "$NVMF_PORT", 00:27:44.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.424 "hdgst": ${hdgst:-false}, 00:27:44.424 "ddgst": ${ddgst:-false} 00:27:44.424 }, 00:27:44.424 "method": "bdev_nvme_attach_controller" 00:27:44.424 } 00:27:44.424 EOF 00:27:44.424 
)") 00:27:44.424 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.683 { 00:27:44.683 "params": { 00:27:44.683 "name": "Nvme$subsystem", 00:27:44.683 "trtype": "$TEST_TRANSPORT", 00:27:44.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.683 "adrfam": "ipv4", 00:27:44.683 "trsvcid": "$NVMF_PORT", 00:27:44.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.683 "hdgst": ${hdgst:-false}, 00:27:44.683 "ddgst": ${ddgst:-false} 00:27:44.683 }, 00:27:44.683 "method": "bdev_nvme_attach_controller" 00:27:44.683 } 00:27:44.683 EOF 00:27:44.683 )") 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.683 { 00:27:44.683 "params": { 00:27:44.683 "name": "Nvme$subsystem", 00:27:44.683 "trtype": "$TEST_TRANSPORT", 00:27:44.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.683 "adrfam": "ipv4", 00:27:44.683 "trsvcid": "$NVMF_PORT", 00:27:44.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.683 "hdgst": ${hdgst:-false}, 00:27:44.683 "ddgst": ${ddgst:-false} 00:27:44.683 }, 00:27:44.683 "method": "bdev_nvme_attach_controller" 00:27:44.683 } 00:27:44.683 EOF 00:27:44.683 )") 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.683 00:07:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.683 { 00:27:44.683 "params": { 00:27:44.683 "name": "Nvme$subsystem", 00:27:44.683 "trtype": "$TEST_TRANSPORT", 00:27:44.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.683 "adrfam": "ipv4", 00:27:44.683 "trsvcid": "$NVMF_PORT", 00:27:44.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.683 "hdgst": ${hdgst:-false}, 00:27:44.683 "ddgst": ${ddgst:-false} 00:27:44.683 }, 00:27:44.683 "method": "bdev_nvme_attach_controller" 00:27:44.683 } 00:27:44.683 EOF 00:27:44.683 )") 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.683 { 00:27:44.683 "params": { 00:27:44.683 "name": "Nvme$subsystem", 00:27:44.683 "trtype": "$TEST_TRANSPORT", 00:27:44.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.683 "adrfam": "ipv4", 00:27:44.683 "trsvcid": "$NVMF_PORT", 00:27:44.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.683 "hdgst": ${hdgst:-false}, 00:27:44.683 "ddgst": ${ddgst:-false} 00:27:44.683 }, 00:27:44.683 "method": "bdev_nvme_attach_controller" 00:27:44.683 } 00:27:44.683 EOF 00:27:44.683 )") 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.683 { 00:27:44.683 "params": { 00:27:44.683 "name": "Nvme$subsystem", 00:27:44.683 "trtype": "$TEST_TRANSPORT", 00:27:44.683 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:44.683 "adrfam": "ipv4", 00:27:44.683 "trsvcid": "$NVMF_PORT", 00:27:44.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.683 "hdgst": ${hdgst:-false}, 00:27:44.683 "ddgst": ${ddgst:-false} 00:27:44.683 }, 00:27:44.683 "method": "bdev_nvme_attach_controller" 00:27:44.683 } 00:27:44.683 EOF 00:27:44.683 )") 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.683 { 00:27:44.683 "params": { 00:27:44.683 "name": "Nvme$subsystem", 00:27:44.683 "trtype": "$TEST_TRANSPORT", 00:27:44.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.683 "adrfam": "ipv4", 00:27:44.683 "trsvcid": "$NVMF_PORT", 00:27:44.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.683 "hdgst": ${hdgst:-false}, 00:27:44.683 "ddgst": ${ddgst:-false} 00:27:44.683 }, 00:27:44.683 "method": "bdev_nvme_attach_controller" 00:27:44.683 } 00:27:44.683 EOF 00:27:44.683 )") 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.683 { 00:27:44.683 "params": { 00:27:44.683 "name": "Nvme$subsystem", 00:27:44.683 "trtype": "$TEST_TRANSPORT", 00:27:44.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.683 "adrfam": "ipv4", 00:27:44.683 "trsvcid": "$NVMF_PORT", 00:27:44.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.683 
"hdgst": ${hdgst:-false}, 00:27:44.683 "ddgst": ${ddgst:-false} 00:27:44.683 }, 00:27:44.683 "method": "bdev_nvme_attach_controller" 00:27:44.683 } 00:27:44.683 EOF 00:27:44.683 )") 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.683 { 00:27:44.683 "params": { 00:27:44.683 "name": "Nvme$subsystem", 00:27:44.683 "trtype": "$TEST_TRANSPORT", 00:27:44.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.683 "adrfam": "ipv4", 00:27:44.683 "trsvcid": "$NVMF_PORT", 00:27:44.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.683 "hdgst": ${hdgst:-false}, 00:27:44.683 "ddgst": ${ddgst:-false} 00:27:44.683 }, 00:27:44.683 "method": "bdev_nvme_attach_controller" 00:27:44.683 } 00:27:44.683 EOF 00:27:44.683 )") 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
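The repeated `config+=("$(cat <<-EOF ...)")` / `cat` pairs above are gen_nvmf_target_json building one JSON fragment per subsystem, which is then joined and pretty-printed with `jq .` (the `printf '%s\n'` output follows below). A minimal sketch of the same heredoc-accumulation pattern, with three subsystems instead of ten and the `jq` step omitted so the sketch has no dependencies:

```shell
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
  # capture one attach-controller stanza per subsystem via a heredoc
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# join the fragments with commas; IFS is set in a subshell so it is not leaked
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '%s\n' "[$joined]"
```

bdevperf then receives this via `--json /dev/fd/63`, i.e. as a process-substitution file descriptor rather than a file on disk.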
00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:44.683 00:07:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:44.683 "params": { 00:27:44.683 "name": "Nvme1", 00:27:44.683 "trtype": "tcp", 00:27:44.683 "traddr": "10.0.0.2", 00:27:44.683 "adrfam": "ipv4", 00:27:44.683 "trsvcid": "4420", 00:27:44.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:44.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:44.683 "hdgst": false, 00:27:44.683 "ddgst": false 00:27:44.683 }, 00:27:44.683 "method": "bdev_nvme_attach_controller" 00:27:44.683 },{ 00:27:44.683 "params": { 00:27:44.683 "name": "Nvme2", 00:27:44.683 "trtype": "tcp", 00:27:44.683 "traddr": "10.0.0.2", 00:27:44.683 "adrfam": "ipv4", 00:27:44.683 "trsvcid": "4420", 00:27:44.684 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:44.684 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:44.684 "hdgst": false, 00:27:44.684 "ddgst": false 00:27:44.684 }, 00:27:44.684 "method": "bdev_nvme_attach_controller" 00:27:44.684 },{ 00:27:44.684 "params": { 00:27:44.684 "name": "Nvme3", 00:27:44.684 "trtype": "tcp", 00:27:44.684 "traddr": "10.0.0.2", 00:27:44.684 "adrfam": "ipv4", 00:27:44.684 "trsvcid": "4420", 00:27:44.684 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:44.684 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:44.684 "hdgst": false, 00:27:44.684 "ddgst": false 00:27:44.684 }, 00:27:44.684 "method": "bdev_nvme_attach_controller" 00:27:44.684 },{ 00:27:44.684 "params": { 00:27:44.684 "name": "Nvme4", 00:27:44.684 "trtype": "tcp", 00:27:44.684 "traddr": "10.0.0.2", 00:27:44.684 "adrfam": "ipv4", 00:27:44.684 "trsvcid": "4420", 00:27:44.684 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:44.684 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:44.684 "hdgst": false, 00:27:44.684 "ddgst": false 00:27:44.684 }, 00:27:44.684 "method": "bdev_nvme_attach_controller" 00:27:44.684 },{ 00:27:44.684 "params": { 00:27:44.684 "name": "Nvme5", 00:27:44.684 
"trtype": "tcp", 00:27:44.684 "traddr": "10.0.0.2", 00:27:44.684 "adrfam": "ipv4", 00:27:44.684 "trsvcid": "4420", 00:27:44.684 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:44.684 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:44.684 "hdgst": false, 00:27:44.684 "ddgst": false 00:27:44.684 }, 00:27:44.684 "method": "bdev_nvme_attach_controller" 00:27:44.684 },{ 00:27:44.684 "params": { 00:27:44.684 "name": "Nvme6", 00:27:44.684 "trtype": "tcp", 00:27:44.684 "traddr": "10.0.0.2", 00:27:44.684 "adrfam": "ipv4", 00:27:44.684 "trsvcid": "4420", 00:27:44.684 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:44.684 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:44.684 "hdgst": false, 00:27:44.684 "ddgst": false 00:27:44.684 }, 00:27:44.684 "method": "bdev_nvme_attach_controller" 00:27:44.684 },{ 00:27:44.684 "params": { 00:27:44.684 "name": "Nvme7", 00:27:44.684 "trtype": "tcp", 00:27:44.684 "traddr": "10.0.0.2", 00:27:44.684 "adrfam": "ipv4", 00:27:44.684 "trsvcid": "4420", 00:27:44.684 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:44.684 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:44.684 "hdgst": false, 00:27:44.684 "ddgst": false 00:27:44.684 }, 00:27:44.684 "method": "bdev_nvme_attach_controller" 00:27:44.684 },{ 00:27:44.684 "params": { 00:27:44.684 "name": "Nvme8", 00:27:44.684 "trtype": "tcp", 00:27:44.684 "traddr": "10.0.0.2", 00:27:44.684 "adrfam": "ipv4", 00:27:44.684 "trsvcid": "4420", 00:27:44.684 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:44.684 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:44.684 "hdgst": false, 00:27:44.684 "ddgst": false 00:27:44.684 }, 00:27:44.684 "method": "bdev_nvme_attach_controller" 00:27:44.684 },{ 00:27:44.684 "params": { 00:27:44.684 "name": "Nvme9", 00:27:44.684 "trtype": "tcp", 00:27:44.684 "traddr": "10.0.0.2", 00:27:44.684 "adrfam": "ipv4", 00:27:44.684 "trsvcid": "4420", 00:27:44.684 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:44.684 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:44.684 "hdgst": false, 00:27:44.684 "ddgst": 
false 00:27:44.684 }, 00:27:44.684 "method": "bdev_nvme_attach_controller" 00:27:44.684 },{ 00:27:44.684 "params": { 00:27:44.684 "name": "Nvme10", 00:27:44.684 "trtype": "tcp", 00:27:44.684 "traddr": "10.0.0.2", 00:27:44.684 "adrfam": "ipv4", 00:27:44.684 "trsvcid": "4420", 00:27:44.684 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:44.684 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:44.684 "hdgst": false, 00:27:44.684 "ddgst": false 00:27:44.684 }, 00:27:44.684 "method": "bdev_nvme_attach_controller" 00:27:44.684 }' 00:27:44.684 [2024-07-25 00:07:50.176439] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:44.684 [2024-07-25 00:07:50.176517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872250 ] 00:27:44.684 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.684 [2024-07-25 00:07:50.241617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.684 [2024-07-25 00:07:50.331209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.584 Running I/O for 10 seconds... 
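The bdevperf config above enumerates ten near-identical `bdev_nvme_attach_controller` entries against 10.0.0.2:4420, varying only the controller name and NQNs. A minimal sketch of generating such a config programmatically (the helper name `attach_controller_configs` is illustrative, not part of SPDK; the key names mirror the JSON in the log):

```python
import json

def attach_controller_configs(count, traddr="10.0.0.2", trsvcid="4420"):
    """Build one attach_controller entry per target, varying only
    name/subnqn/hostnqn, matching the pattern seen in the log."""
    entries = []
    for i in range(1, count + 1):
        entries.append({
            "params": {
                "name": f"Nvme{i}",
                "trtype": "tcp",
                "traddr": traddr,
                "adrfam": "ipv4",
                "trsvcid": trsvcid,
                "subnqn": f"nqn.2016-06.io.spdk:cnode{i}",
                "hostnqn": f"nqn.2016-06.io.spdk:host{i}",
                "hdgst": False,
                "ddgst": False,
            },
            "method": "bdev_nvme_attach_controller",
        })
    return entries

if __name__ == "__main__":
    # Emit the ten-entry list as JSON, as it would be passed to bdevperf.
    print(json.dumps(attach_controller_configs(10), indent=2))
```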
00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:46.584 
00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:46.584 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:46.842 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:46.842 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:46.842 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:46.842 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:46.842 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.842 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:46.842 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.842 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:46.842 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:46.842 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i 
!= 0 )) 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=135 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 135 -ge 100 ']' 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 872173 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 872173 ']' 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 872173 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 872173 00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:47.101 
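The trace above shows shutdown.sh's `waitforio` loop polling `bdev_get_iostat` over the bdevperf RPC socket: up to 10 attempts, sleeping 0.25 s between polls, succeeding once `num_read_ops` reaches 100 (here the counts progress 3, 67, 135). A sketch of that polling logic, assuming a caller-supplied `read_ops` callable standing in for the `rpc_cmd | jq` pipeline:

```python
import time

def waitforio(read_ops, attempts=10, threshold=100, interval=0.25):
    """Poll read_ops() until it reports at least `threshold` completed
    reads, giving up after `attempts` polls (mirrors shutdown.sh@59-67)."""
    for _ in range(attempts):
        if read_ops() >= threshold:
            return True   # enough I/O observed; break out like shutdown.sh@65
        time.sleep(interval)
    return False          # budget exhausted; shutdown.sh would return ret=1
```

With the counts from this log (3, then 67, then 135) the loop exits successfully on its third poll.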
00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 872173'
killing process with pid 872173
00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 872173
00:27:47.101 00:07:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 872173
00:27:47.101 [2024-07-25 00:07:52.815108] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27eba90 is same with the state(5) to be set
00:27:47.101 [... identical *ERROR* message repeated for tqpair=0x27eba90 with timestamps 00:07:52.815237 through 00:07:52.815966 ...]
00:27:47.367 [2024-07-25 00:07:52.817542] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7d40 is same with the state(5) to be set
00:27:47.367 [... identical *ERROR* message repeated for tqpair=0x26f7d40 with timestamps 00:07:52.817589 through 00:07:52.818566 ...]
00:27:47.368 [2024-07-25 00:07:52.820548] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ebf30 is same with the state(5) to be set
00:27:47.368 [... identical *ERROR* message repeated for tqpair=0x27ebf30 with timestamps 00:07:52.820576 through 00:07:52.821343 ...] 00:27:47.369 [2024-07-25 00:07:52.821365] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ebf30 is same with the state(5) to be set 00:27:47.369 [2024-07-25 00:07:52.821377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ebf30 is same with the state(5) to be set 00:27:47.369 [2024-07-25 00:07:52.821388] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ebf30 is same with the state(5) to be set 00:27:47.369 [2024-07-25 00:07:52.821400] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ebf30 is same with the state(5) to be set 00:27:47.369 [2024-07-25 00:07:52.821397] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:47.369 [2024-07-25 00:07:52.821413] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ebf30 is same with the state(5) to be set 00:27:47.369 [2024-07-25 00:07:52.821432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ebf30 is same with the state(5) to be set 00:27:47.369 [2024-07-25 00:07:52.821445] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ebf30 is same with the state(5) to be set 00:27:47.369 [2024-07-25 00:07:52.821457] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ebf30 is same with the state(5) to be set 00:27:47.369 [2024-07-25 00:07:52.822370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.822450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.822501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.822533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.822564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.822595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.822625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.822655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.822686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.822717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.822749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.822779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.822810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:47.369 [2024-07-25 00:07:52.822841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.822872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.822907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.822939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.822971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.822986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.823003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.823018] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.823034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.823049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.823065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.823080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.823096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.823118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.823135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.823155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.823171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.823186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.823202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.369 [2024-07-25 00:07:52.823217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.369 [2024-07-25 00:07:52.823233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 
00:07:52.823573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823745] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.823975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.823991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.824006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.824022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.824037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.824053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.824067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.824083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 
[2024-07-25 00:07:52.824107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.824127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.824148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.824164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.370 [2024-07-25 00:07:52.824179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.370 [2024-07-25 00:07:52.824194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.371 [2024-07-25 00:07:52.824209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.371 [2024-07-25 00:07:52.824225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.371 [2024-07-25 00:07:52.824240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.371 [2024-07-25 00:07:52.824256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.371 [2024-07-25 00:07:52.824270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.371 [2024-07-25 00:07:52.824286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.371 [2024-07-25 00:07:52.824301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.371 [2024-07-25 00:07:52.824317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.371 [2024-07-25 00:07:52.824332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.371 [2024-07-25 00:07:52.824348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.371 [2024-07-25 00:07:52.824370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.371 [2024-07-25 00:07:52.824386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.371 [2024-07-25 00:07:52.824415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.371 [2024-07-25 00:07:52.824432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.371 [2024-07-25 00:07:52.824446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.371 [2024-07-25 00:07:52.824461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.371 [2024-07-25 00:07:52.824475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.371 [2024-07-25 00:07:52.824490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e4050 is same with the state(5) to be set 00:27:47.371 [2024-07-25 00:07:52.824601] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15e4050 was disconnected and freed. reset controller. 00:27:47.371 [2024-07-25 00:07:52.829507] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ec3d0 is same with the state(5) to be set 00:27:47.371 [2024-07-25 00:07:52.829542] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ec3d0 is same with the state(5) to be set 00:27:47.371 [2024-07-25 00:07:52.829556] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ec3d0 is same with the state(5) to be set 00:27:47.371 [2024-07-25 00:07:52.829569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ec3d0 is same with the state(5) to be set 00:27:47.371 [2024-07-25 00:07:52.829581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ec3d0 is same with the state(5) to be set 00:27:47.371 [2024-07-25 00:07:52.829592] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ec3d0 is same with the state(5) to be set 00:27:47.371 [2024-07-25 00:07:52.829604] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ec3d0 is same with the state(5) to be set 00:27:47.371 [2024-07-25 00:07:52.829616] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ec3d0 is same with the state(5) to be set 00:27:47.371 [2024-07-25 00:07:52.829628] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ec3d0 is same with the state(5) to be set 00:27:47.371 [2024-07-25 00:07:52.829640] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ec3d0 
is same with the state(5) to be set 00:27:47.371 [2024-07-25 00:07:52.829651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ec3d0 is same with the state(5) to be set [last message repeated through 00:07:52.830250] 00:27:47.372 [2024-07-25 00:07:52.831739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ecd50 is same with the state(5) to be set [last message repeated through 00:07:52.832622] 00:27:47.373 [2024-07-25 00:07:52.833788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f6ac0 is same with the state(5) to be set [last message repeated through 00:07:52.834631] 00:27:47.374 [2024-07-25 00:07:52.836053] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f6f60 is same with the state(5) to be set [last message repeated through 00:07:52.836903] 00:27:47.375 [2024-07-25 00:07:52.838015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set [last message repeated through 00:07:52.838253] 00:27:47.376 [2024-07-25 00:07:52.838265] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838290] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838314] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838339] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838411] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838423] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838435] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838447] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838459] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838471] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838483] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838495] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838507] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838555] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838567] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838578] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838627] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838639] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838663] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838675] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838698] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838710] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838725] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.376 [2024-07-25 00:07:52.838797] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.838808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.838820] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.838831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7400 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839637] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839651] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839676] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839688] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839701] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839725] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839774] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839786] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839798] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839811] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839841] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839867] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839880] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839892] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839905] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839917] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839930] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839942] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839954] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.839993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840028] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840065] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840077] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840161] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840177] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840189] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840255] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840267] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840280] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840292] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840316] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840328] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840366] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840378] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840390] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840402] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840442] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.377 [2024-07-25 00:07:52.840465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.378 [2024-07-25 00:07:52.840476] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f78a0 is same with the state(5) to be set 00:27:47.378 [2024-07-25 00:07:52.843986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.378 [2024-07-25 00:07:52.844021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.378 [2024-07-25 00:07:52.844039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.378 [2024-07-25 00:07:52.844053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.378 [2024-07-25 00:07:52.844066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.378 [2024-07-25 00:07:52.844080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.378 [2024-07-25 00:07:52.844094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.378 [2024-07-25 00:07:52.844117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.378 [2024-07-25 00:07:52.844146] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc7e40 is same with the state(5) to be set 00:27:47.378 [2024-07-25 00:07:52.844204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.378 [2024-07-25 00:07:52.844226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.378 -- same ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 / ABORTED - SQ DELETION (00/08) sequence repeated for tqpair=0x1bbffd0, 0x1a3ef50, 0x1a44770, 0x150f610, 0x1bbc7a0, 0x15e0df0, 0x1a20a60, 0x1a19400, 0x1a18700; [2024-07-25 00:07:52.844241] through [2024-07-25 00:07:52.845845]; identical records condensed --
[2024-07-25 00:07:52.847291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.847375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.847434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.847486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.847520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.847553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.847586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.847618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.847652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.847684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.847722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.847755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:47.379 [2024-07-25 00:07:52.847787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.847820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.847853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.847886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.847920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.847952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847967] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.847984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.847999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.848016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.848032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.848048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.848064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.848081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.379 [2024-07-25 00:07:52.848096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.379 [2024-07-25 00:07:52.848123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 
[2024-07-25 00:07:52.848538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.848976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.848991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.849013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.849029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.849045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.849061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.849077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.849093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.849117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.849144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.849160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.849175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.849191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.849207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.849223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.849238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.849254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.849269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 
[2024-07-25 00:07:52.849286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.849300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.849317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-07-25 00:07:52.849331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-07-25 00:07:52.849347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.849362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.849383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.849398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.849414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.849428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.849445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.849460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.849476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.849491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.849506] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96490 is same with the state(5) to be set 00:27:47.381 [2024-07-25 00:07:52.849600] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b96490 was disconnected and freed. reset controller. 00:27:47.381 [2024-07-25 00:07:52.850053] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:47.381 [2024-07-25 00:07:52.850092] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.381 [2024-07-25 00:07:52.850145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e0df0 (9): Bad file descriptor 00:27:47.381 [2024-07-25 00:07:52.851814] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:47.381 [2024-07-25 00:07:52.851914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a18700 (9): Bad file descriptor 00:27:47.381 [2024-07-25 00:07:52.852922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.381 [2024-07-25 00:07:52.852955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e0df0 with addr=10.0.0.2, port=4420 00:27:47.381 [2024-07-25 00:07:52.852974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e0df0 is same with the state(5) to be set 00:27:47.381 
[2024-07-25 00:07:52.853075] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:47.381 [2024-07-25 00:07:52.853599] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:47.381 [2024-07-25 00:07:52.853675] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:47.381 [2024-07-25 00:07:52.853741] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:47.381 [2024-07-25 00:07:52.853812] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:47.381 [2024-07-25 00:07:52.853882] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:47.381 [2024-07-25 00:07:52.854036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.381 [2024-07-25 00:07:52.854062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a18700 with addr=10.0.0.2, port=4420 00:27:47.381 [2024-07-25 00:07:52.854079] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a18700 is same with the state(5) to be set 00:27:47.381 [2024-07-25 00:07:52.854099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e0df0 (9): Bad file descriptor 00:27:47.381 [2024-07-25 00:07:52.854137] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc7e40 (9): Bad file descriptor 00:27:47.381 [2024-07-25 00:07:52.854182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbffd0 (9): Bad file descriptor 00:27:47.381 [2024-07-25 00:07:52.854215] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a3ef50 (9): Bad file descriptor 00:27:47.381 [2024-07-25 00:07:52.854244] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a44770 (9): Bad file descriptor 00:27:47.381 [2024-07-25 00:07:52.854274] 
nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x150f610 (9): Bad file descriptor 00:27:47.381 [2024-07-25 00:07:52.854305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbc7a0 (9): Bad file descriptor 00:27:47.381 [2024-07-25 00:07:52.854337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a20a60 (9): Bad file descriptor 00:27:47.381 [2024-07-25 00:07:52.854366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a19400 (9): Bad file descriptor 00:27:47.381 [2024-07-25 00:07:52.854466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.854499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.854533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.854554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.854571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.854587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.854603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.854618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.854635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.854649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.854666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.854680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.854696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.854711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.854728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.854742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.854759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.854773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.854789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.854810] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.854827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.854842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.854858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.854873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.854890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.854905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.854921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.854936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.854952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.854967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.854984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.854998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.855015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.865078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.865164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.865182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.865199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.865216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.865233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.865249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.381 [2024-07-25 00:07:52.865266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.381 [2024-07-25 00:07:52.865281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 
00:07:52.865483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865657] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.865975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.865989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.866006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 
[2024-07-25 00:07:52.866020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.866037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.866052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.866068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.866083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.866099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.866121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.866142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.866157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.866174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.866189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.866205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.866220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.866236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.866251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.866267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.866282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.866298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.866314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.866330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.866345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.866361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.866376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.866393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.866407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.866424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.866438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.866454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.866469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.382 [2024-07-25 00:07:52.866485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.382 [2024-07-25 00:07:52.866499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.866515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.866533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.866550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.866565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.866581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.866596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.866612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.866627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.866642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.866657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.866797] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19eace0 was disconnected and freed. reset controller. 
00:27:47.383 [2024-07-25 00:07:52.867038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a18700 (9): Bad file descriptor 00:27:47.383 [2024-07-25 00:07:52.867070] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.383 [2024-07-25 00:07:52.867086] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.383 [2024-07-25 00:07:52.867114] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.383 [2024-07-25 00:07:52.867180] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:47.383 [2024-07-25 00:07:52.867204] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:47.383 [2024-07-25 00:07:52.867259] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:47.383 [2024-07-25 00:07:52.868498] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.383 [2024-07-25 00:07:52.868549] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:47.383 [2024-07-25 00:07:52.868565] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:47.383 [2024-07-25 00:07:52.868580] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:27:47.383 [2024-07-25 00:07:52.868671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.868694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.868716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.868732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.868749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.868764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.868787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.868802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.868819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.868833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.868849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.868864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.868881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.868896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.868912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.868927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.868943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.868957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.868973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.868988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.869004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.869018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.869034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.869049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.869066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.869081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.869097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.869121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.869138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.869153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.869170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.869188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.869205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.869220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:47.383 [2024-07-25 00:07:52.869237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.869251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.869268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.869282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.869298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.869313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.869329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.869344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.869360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.869376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.869393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.869407] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.869423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.869438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.869454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.869469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.869485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.869500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.869516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.383 [2024-07-25 00:07:52.869530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.383 [2024-07-25 00:07:52.869547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.384 [2024-07-25 00:07:52.869562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.384 [2024-07-25 00:07:52.869582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... 00:27:47.384-00:27:47.385 / 2024-07-25 00:07:52.869597-00:07:52.870701: repeated pairs of nvme_qpair.c: 243:nvme_io_qpair_print_command (*NOTICE*: READ sqid:1 cid:28-63 nsid:1 lba:19968-24448 step 128, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and nvme_qpair.c: 474:spdk_nvme_print_completion (*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0) condensed ...]
00:27:47.385 [2024-07-25 00:07:52.870716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b979b0 is same with the state(5) to be set
[... 00:27:47.385-00:27:47.386 / 2024-07-25 00:07:52.871970-00:07:52.874021: repeated command/completion pairs, each completed ABORTED - SQ DELETION (00/08): READ sqid:1 cid:4-56 nsid:1 lba:16896-23552 step 128, then WRITE sqid:1 cid:0-3 nsid:1 lba:24576-24960 step 128, then READ sqid:1 cid:57-63 nsid:1 lba:23680-24448 step 128 (all len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), condensed ...]
00:27:47.386 [2024-07-25 00:07:52.874041] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b98ed0 is same with the state(5) to be set
[... 00:27:47.386-00:27:47.387 / 2024-07-25 00:07:52.875312-00:07:52.875697: repeated pairs of READ commands (sqid:1 cid:0-11 nsid:1 lba:16384-17792 step 128, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and ABORTED - SQ DELETION (00/08) completions, condensed ...]
00:27:47.387 [2024-07-25 00:07:52.875718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.875733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.875750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.875764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.875781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.875795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.875811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.875825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.875841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.875856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.875872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.875887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:47.387 [2024-07-25 00:07:52.875903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.875918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.875934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.875949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.875965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.875980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.875996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876072] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 
00:07:52.876636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-07-25 00:07:52.876666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-07-25 00:07:52.876683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.876697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.876714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.876728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.876744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.876759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.876775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.876790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.876806] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.876821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.876836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.876851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.876867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.876882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.876898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.876917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.876933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.876949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.876965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.876980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.876996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.877010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.877027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.877042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.877058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.877073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.877090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.877111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.877129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.877155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.877171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 
[2024-07-25 00:07:52.877186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.877202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.877216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.877232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.877247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.877263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.877278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.877294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.877308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.877329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.877344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.877360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.877375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.877397] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9a3f0 is same with the state(5) to be set 00:27:47.388 [2024-07-25 00:07:52.878638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.878661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.878684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.878700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.878717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.878731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.878747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.878763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.878779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.878795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.878811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.878826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.878843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.878858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.878875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.878890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.878906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-07-25 00:07:52.878920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-07-25 00:07:52.878936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.878951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:47.389 [2024-07-25 00:07:52.878972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.878987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879162] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 
00:07:52.879690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879871] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.879980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.879996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.880011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.880027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.880042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.880058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.880072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.880088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.880109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.880127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-07-25 00:07:52.880142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-07-25 00:07:52.880158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.880173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.880193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.880209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.880225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 
[2024-07-25 00:07:52.880239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.880256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.880271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.880287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.880301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.880323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.880338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.880354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.880368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.880384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.880398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.880414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.880429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.880445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.880460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.880476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.887783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.887853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.887870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.887886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.887901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.887918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.887945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.887962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.887977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.887993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.888008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.888024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.888039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.888055] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ebbc0 is same with the state(5) to be set 00:27:47.390 [2024-07-25 00:07:52.889433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.889458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.889483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.889500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:47.390 [2024-07-25 00:07:52.889516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.889531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.889548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.889564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.889581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.889596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.889613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.889628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.889644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.889659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.889676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.889691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.889707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.889727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.889745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.889760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.889776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.889791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.889808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.889822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.889839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.889853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.889870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.889885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.889901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.889916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.889932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.889947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.889964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.889979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.889995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.890010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-07-25 00:07:52.890026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.890041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:47.390 [2024-07-25 00:07:52.890057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-07-25 00:07:52.890072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890249] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 
00:07:52.890781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890958] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.890973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.890989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.891004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.891020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.891035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.891051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.891066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.891082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.891097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.891120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.891136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.891152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.891167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.891184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.891198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.891215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.891229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.891246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.891261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-07-25 00:07:52.891278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.391 [2024-07-25 00:07:52.891292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.891309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 
[2024-07-25 00:07:52.891328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.891345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.891360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.891377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.891392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.891408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.891423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.891439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.891454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.891470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.891485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.891499] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f4f90 is same with the state(5) to be set 00:27:47.392 [2024-07-25 00:07:52.893601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.893627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.893650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.893667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.893685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.893700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.893718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.893733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.893750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.893765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.893781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.893796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.893812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.893832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.893849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.893864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.893880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.893894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.893911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.893926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.893941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.893956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:47.392 [2024-07-25 00:07:52.893972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.893987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894155] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.392 [2024-07-25 00:07:52.894560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.392 [2024-07-25 00:07:52.894577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.894591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.894607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.894625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.894642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.894657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.894674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 
00:07:52.894689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.894705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.894719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.894735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.894750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.894766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.894780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.894797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.894811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.894827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.894842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.894858] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.894873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.894889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.894904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.894920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.894935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.894950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.894965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.894981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.894995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.895030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.895060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.895092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.895129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.895160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.895191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 
[2024-07-25 00:07:52.895221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.895252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.895282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.895313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.895343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.895373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.895408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.895440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.895470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.895501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.895532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.393 [2024-07-25 00:07:52.895562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.393 [2024-07-25 00:07:52.895578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.895593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.895609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.895624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.895640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acd530 is same with the state(5) to be set 00:27:47.394 [2024-07-25 00:07:52.897455] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:47.394 [2024-07-25 00:07:52.897490] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.394 [2024-07-25 00:07:52.897508] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:47.394 [2024-07-25 00:07:52.897524] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:47.394 [2024-07-25 00:07:52.897619] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:47.394 [2024-07-25 00:07:52.897645] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:47.394 [2024-07-25 00:07:52.897669] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:47.394 [2024-07-25 00:07:52.897688] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:47.394 [2024-07-25 00:07:52.897708] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:47.394 [2024-07-25 00:07:52.897767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.897787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.897813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.897830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.897847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.897862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.897879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.897894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.897910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.897924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.897940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.897955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.897971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.897986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898297] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898469] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.394 [2024-07-25 00:07:52.898769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-07-25 00:07:52.898784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.898801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.898816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 
00:07:52.898832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.898847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.898863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.898878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.898894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.898909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.898925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.898939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.898955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.898970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.898986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:47.395 [2024-07-25 00:07:52.899372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899546] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-07-25 00:07:52.899797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.395 [2024-07-25 00:07:52.899814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e5330 is same with the state(5) to be set 00:27:47.395 [2024-07-25 00:07:52.901366] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:47.395 [2024-07-25 00:07:52.901396] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:47.395 [2024-07-25 00:07:52.901415] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:47.395 [2024-07-25 00:07:52.901431] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:47.395 [2024-07-25 00:07:52.901700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.395 [2024-07-25 00:07:52.901730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a20a60 with addr=10.0.0.2, port=4420 00:27:47.395 
[2024-07-25 00:07:52.901747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20a60 is same with the state(5) to be set 00:27:47.395 [2024-07-25 00:07:52.901892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.395 [2024-07-25 00:07:52.901916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a3ef50 with addr=10.0.0.2, port=4420 00:27:47.395 [2024-07-25 00:07:52.901932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3ef50 is same with the state(5) to be set 00:27:47.395 [2024-07-25 00:07:52.902064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.395 [2024-07-25 00:07:52.902087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a44770 with addr=10.0.0.2, port=4420 00:27:47.396 [2024-07-25 00:07:52.902110] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a44770 is same with the state(5) to be set
00:27:47.396 task offset: 16896 on job bdev=Nvme1n1 fails
00:27:47.396
00:27:47.396 Latency(us)
00:27:47.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:47.396 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:47.396 Job: Nvme1n1 ended in about 0.92 seconds with error
00:27:47.396 Verification LBA range: start 0x0 length 0x400
00:27:47.396 Nvme1n1 : 0.92 143.40 8.96 69.53 0.00 297267.43 21748.24 279620.27
00:27:47.396 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:47.396 Job: Nvme2n1 ended in about 0.97 seconds with error
00:27:47.396 Verification LBA range: start 0x0 length 0x400
00:27:47.396 Nvme2n1 : 0.97 131.37 8.21 65.69 0.00 315267.86 20291.89 287387.50
00:27:47.396 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:47.396 Job: Nvme3n1 ended in about 0.94 seconds with error
00:27:47.396 Verification LBA range: start 0x0 length 0x400
00:27:47.396 Nvme3n1 : 0.94 135.90 8.49 67.95 0.00 298248.22 14272.28 346418.44
00:27:47.396 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:47.396 Job: Nvme4n1 ended in about 0.92 seconds with error
00:27:47.396 Verification LBA range: start 0x0 length 0x400
00:27:47.396 Nvme4n1 : 0.92 207.58 12.97 69.19 0.00 214728.91 7427.41 288940.94
00:27:47.396 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:47.396 Job: Nvme5n1 ended in about 0.95 seconds with error
00:27:47.396 Verification LBA range: start 0x0 length 0x400
00:27:47.396 Nvme5n1 : 0.95 135.41 8.46 67.70 0.00 287074.99 21262.79 316902.97
00:27:47.396 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:47.396 Job: Nvme6n1 ended in about 0.95 seconds with error
00:27:47.396 Verification LBA range: start 0x0 length 0x400
00:27:47.396 Nvme6n1 : 0.95 139.15 8.70 67.47 0.00 276407.86 26408.58 295154.73
00:27:47.396 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:47.396 Job: Nvme7n1 ended in about 0.95 seconds with error
00:27:47.396 Verification LBA range: start 0x0 length 0x400
00:27:47.396 Nvme7n1 : 0.95 134.46 8.40 67.23 0.00 277511.59 36505.98 257872.02
00:27:47.396 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:47.396 Job: Nvme8n1 ended in about 0.96 seconds with error
00:27:47.396 Verification LBA range: start 0x0 length 0x400
00:27:47.396 Nvme8n1 : 0.96 132.97 8.31 66.48 0.00 275197.09 20000.62 285834.05
00:27:47.396 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:47.396 Job: Nvme9n1 ended in about 0.97 seconds with error
00:27:47.396 Verification LBA range: start 0x0 length 0x400
00:27:47.396 Nvme9n1 : 0.97 136.65 8.54 66.25 0.00 264852.13 21651.15 284280.60
00:27:47.396 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:47.396 Job: Nvme10n1 ended in about 0.97 seconds with error
00:27:47.396 Verification LBA range: start 0x0 length 0x400
00:27:47.396 Nvme10n1 : 0.97 131.94 8.25 65.97 0.00 265954.04 22330.79 298261.62
00:27:47.396 ===================================================================================================================
00:27:47.396 Total : 1428.83 89.30 673.47 0.00 275259.87 7427.41 346418.44
00:27:47.396 [2024-07-25 00:07:52.931315] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:47.396 [2024-07-25 00:07:52.931404] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:47.396 [2024-07-25 00:07:52.931733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.396 [2024-07-25 00:07:52.931770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150f610 with addr=10.0.0.2, port=4420 00:27:47.396 [2024-07-25 00:07:52.931792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x150f610 is same with the state(5) to be set 00:27:47.396 [2024-07-25 00:07:52.931931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.396 [2024-07-25 00:07:52.931958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bbffd0 with addr=10.0.0.2, port=4420 00:27:47.396 [2024-07-25 00:07:52.931975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbffd0 is same with the state(5) to be set 00:27:47.396 [2024-07-25 00:07:52.932230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.396 [2024-07-25 00:07:52.932259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc7e40 with addr=10.0.0.2, port=4420 00:27:47.396 [2024-07-25 00:07:52.932276] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc7e40 is same with the state(5) to be set 00:27:47.396 [2024-07-25
00:07:52.932402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.396 [2024-07-25 00:07:52.932428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bbc7a0 with addr=10.0.0.2, port=4420 00:27:47.396 [2024-07-25 00:07:52.932443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbc7a0 is same with the state(5) to be set 00:27:47.396 [2024-07-25 00:07:52.932472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a20a60 (9): Bad file descriptor 00:27:47.396 [2024-07-25 00:07:52.932497] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a3ef50 (9): Bad file descriptor 00:27:47.396 [2024-07-25 00:07:52.932516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a44770 (9): Bad file descriptor 00:27:47.396 [2024-07-25 00:07:52.933276] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.396 [2024-07-25 00:07:52.933310] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:47.396 [2024-07-25 00:07:52.933536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.396 [2024-07-25 00:07:52.933566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a19400 with addr=10.0.0.2, port=4420 00:27:47.396 [2024-07-25 00:07:52.933595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19400 is same with the state(5) to be set 00:27:47.396 [2024-07-25 00:07:52.933615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x150f610 (9): Bad file descriptor 00:27:47.396 [2024-07-25 00:07:52.933634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbffd0 (9): Bad file descriptor 00:27:47.396 [2024-07-25 
00:07:52.933652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc7e40 (9): Bad file descriptor 00:27:47.396 [2024-07-25 00:07:52.933670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbc7a0 (9): Bad file descriptor 00:27:47.396 [2024-07-25 00:07:52.933689] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:47.396 [2024-07-25 00:07:52.933703] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:47.396 [2024-07-25 00:07:52.933721] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:47.396 [2024-07-25 00:07:52.933743] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:47.396 [2024-07-25 00:07:52.933758] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:47.396 [2024-07-25 00:07:52.933771] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:47.396 [2024-07-25 00:07:52.933789] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:47.396 [2024-07-25 00:07:52.933802] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:47.396 [2024-07-25 00:07:52.933816] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:47.396 [2024-07-25 00:07:52.933848] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:47.396 [2024-07-25 00:07:52.933874] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:47.396 [2024-07-25 00:07:52.933893] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:47.396 [2024-07-25 00:07:52.933925] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:47.396 [2024-07-25 00:07:52.933949] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:47.396 [2024-07-25 00:07:52.933969] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:47.396 [2024-07-25 00:07:52.933987] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:47.396 [2024-07-25 00:07:52.934090] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.396 [2024-07-25 00:07:52.934120] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.396 [2024-07-25 00:07:52.934134] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.396 [2024-07-25 00:07:52.934266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.396 [2024-07-25 00:07:52.934293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e0df0 with addr=10.0.0.2, port=4420 00:27:47.396 [2024-07-25 00:07:52.934309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e0df0 is same with the state(5) to be set 00:27:47.396 [2024-07-25 00:07:52.934452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.396 [2024-07-25 00:07:52.934477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a18700 with addr=10.0.0.2, port=4420 00:27:47.396 [2024-07-25 00:07:52.934493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a18700 is same with the state(5) to be set 00:27:47.396 [2024-07-25 00:07:52.934518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a19400 (9): Bad file descriptor 00:27:47.396 [2024-07-25 00:07:52.934536] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:47.396 [2024-07-25 00:07:52.934549] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:47.396 [2024-07-25 00:07:52.934563] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:47.396 [2024-07-25 00:07:52.934582] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:47.397 [2024-07-25 00:07:52.934597] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:47.397 [2024-07-25 00:07:52.934610] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:27:47.397 [2024-07-25 00:07:52.934625] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:47.397 [2024-07-25 00:07:52.934640] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:47.397 [2024-07-25 00:07:52.934654] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:47.397 [2024-07-25 00:07:52.934670] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:47.397 [2024-07-25 00:07:52.934684] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:47.397 [2024-07-25 00:07:52.934697] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:47.397 [2024-07-25 00:07:52.934752] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.397 [2024-07-25 00:07:52.934773] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.397 [2024-07-25 00:07:52.934788] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.397 [2024-07-25 00:07:52.934800] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.397 [2024-07-25 00:07:52.934816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e0df0 (9): Bad file descriptor 00:27:47.397 [2024-07-25 00:07:52.934836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a18700 (9): Bad file descriptor 00:27:47.397 [2024-07-25 00:07:52.934852] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:47.397 [2024-07-25 00:07:52.934866] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:47.397 [2024-07-25 00:07:52.934880] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:47.397 [2024-07-25 00:07:52.934921] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.397 [2024-07-25 00:07:52.934939] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.397 [2024-07-25 00:07:52.934954] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.397 [2024-07-25 00:07:52.934967] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.397 [2024-07-25 00:07:52.934983] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:47.397 [2024-07-25 00:07:52.934998] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:47.397 [2024-07-25 00:07:52.935011] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:47.397 [2024-07-25 00:07:52.935047] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.397 [2024-07-25 00:07:52.935068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.962 00:07:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:47.962 00:07:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 872250 00:27:48.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (872250) - No such process 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:27:48.899 rmmod nvme_tcp 00:27:48.899 rmmod nvme_fabrics 00:27:48.899 rmmod nvme_keyring 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:48.899 00:07:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.803 00:07:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:50.803 00:27:50.803 real 0m7.469s 00:27:50.803 user 0m17.970s 00:27:50.803 sys 0m1.525s 00:27:50.803 00:07:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:50.803 00:07:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:50.803 ************************************ 00:27:50.803 END TEST nvmf_shutdown_tc3 00:27:50.803 
************************************ 00:27:50.803 00:07:56 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:50.803 00:27:50.803 real 0m27.542s 00:27:50.803 user 1m15.960s 00:27:50.803 sys 0m6.744s 00:27:50.803 00:07:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:50.803 00:07:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:50.803 ************************************ 00:27:50.803 END TEST nvmf_shutdown 00:27:50.803 ************************************ 00:27:51.062 00:07:56 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:51.062 00:07:56 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:51.062 00:07:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:51.062 00:07:56 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:51.062 00:07:56 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:51.062 00:07:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:51.062 00:07:56 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:51.062 00:07:56 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:51.062 00:07:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:51.062 00:07:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:51.062 00:07:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:51.062 ************************************ 00:27:51.062 START TEST nvmf_multicontroller 00:27:51.062 ************************************ 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:51.062 * Looking for test storage... 
00:27:51.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:51.062 
00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.062 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:51.063 00:07:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:51.063 00:07:56 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:51.063 00:07:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.591 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:53.591 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:53.591 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:53.591 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:53.591 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:53.591 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:53.591 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:53.591 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:53.591 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:53.591 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:53.591 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:53.591 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:53.591 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:53.591 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:53.591 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:53.591 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:53.591 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:53.592 00:07:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:53.592 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:53.592 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:53.592 00:07:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:53.592 Found net devices under 0000:09:00.0: cvl_0_0 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:53.592 Found net devices under 0000:09:00.1: cvl_0_1 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:53.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:53.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:27:53.592 00:27:53.592 --- 10.0.0.2 ping statistics --- 00:27:53.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.592 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:53.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:53.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:27:53.592 00:27:53.592 --- 10.0.0.1 ping statistics --- 00:27:53.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.592 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 
-- # modprobe nvme-tcp 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=874751 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 874751 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 874751 ']' 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:53.592 00:07:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.593 00:07:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:53.593 00:07:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.593 [2024-07-25 00:07:58.928029] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:27:53.593 [2024-07-25 00:07:58.928112] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.593 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.593 [2024-07-25 00:07:58.992756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:53.593 [2024-07-25 00:07:59.082374] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.593 [2024-07-25 00:07:59.082427] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.593 [2024-07-25 00:07:59.082440] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.593 [2024-07-25 00:07:59.082450] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.593 [2024-07-25 00:07:59.082460] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:53.593 [2024-07-25 00:07:59.082543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:53.593 [2024-07-25 00:07:59.082605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:53.593 [2024-07-25 00:07:59.082607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.593 [2024-07-25 00:07:59.226803] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.593 Malloc0 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.593 [2024-07-25 00:07:59.294567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.593 [2024-07-25 00:07:59.302464] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.593 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.851 Malloc1 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=874899 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 874899 /var/tmp/bdevperf.sock 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 874899 ']' 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:53.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:53.851 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.110 NVMe0n1 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.110 1 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.110 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.110 request: 00:27:54.110 { 00:27:54.110 "name": "NVMe0", 00:27:54.110 "trtype": "tcp", 00:27:54.110 "traddr": "10.0.0.2", 00:27:54.110 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:54.110 "hostaddr": "10.0.0.2", 00:27:54.110 "hostsvcid": "60000", 00:27:54.110 "adrfam": "ipv4", 00:27:54.111 "trsvcid": "4420", 00:27:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.111 "method": "bdev_nvme_attach_controller", 00:27:54.111 "req_id": 1 00:27:54.111 } 00:27:54.111 Got JSON-RPC error response 00:27:54.111 response: 00:27:54.111 { 00:27:54.111 "code": -114, 00:27:54.111 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:54.111 } 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:54.111 00:07:59 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.111 request: 00:27:54.111 { 00:27:54.111 "name": "NVMe0", 00:27:54.111 "trtype": "tcp", 
00:27:54.111 "traddr": "10.0.0.2", 00:27:54.111 "hostaddr": "10.0.0.2", 00:27:54.111 "hostsvcid": "60000", 00:27:54.111 "adrfam": "ipv4", 00:27:54.111 "trsvcid": "4420", 00:27:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:54.111 "method": "bdev_nvme_attach_controller", 00:27:54.111 "req_id": 1 00:27:54.111 } 00:27:54.111 Got JSON-RPC error response 00:27:54.111 response: 00:27:54.111 { 00:27:54.111 "code": -114, 00:27:54.111 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:54.111 } 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.111 request: 00:27:54.111 { 00:27:54.111 "name": "NVMe0", 00:27:54.111 "trtype": "tcp", 00:27:54.111 "traddr": "10.0.0.2", 00:27:54.111 "hostaddr": "10.0.0.2", 00:27:54.111 "hostsvcid": "60000", 00:27:54.111 "adrfam": "ipv4", 00:27:54.111 "trsvcid": "4420", 00:27:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.111 "multipath": "disable", 00:27:54.111 "method": "bdev_nvme_attach_controller", 00:27:54.111 "req_id": 1 00:27:54.111 } 00:27:54.111 Got JSON-RPC error response 00:27:54.111 response: 00:27:54.111 { 00:27:54.111 "code": -114, 00:27:54.111 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:54.111 } 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 
10.0.0.2 -c 60000 -x failover 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.111 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.403 request: 00:27:54.403 { 00:27:54.403 "name": "NVMe0", 00:27:54.403 "trtype": "tcp", 00:27:54.403 "traddr": "10.0.0.2", 00:27:54.403 "hostaddr": "10.0.0.2", 00:27:54.403 "hostsvcid": "60000", 00:27:54.403 "adrfam": "ipv4", 00:27:54.403 "trsvcid": "4420", 00:27:54.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.403 "multipath": "failover", 00:27:54.403 "method": "bdev_nvme_attach_controller", 00:27:54.403 "req_id": 1 00:27:54.403 } 00:27:54.403 Got JSON-RPC error response 00:27:54.403 response: 00:27:54.403 { 00:27:54.403 "code": -114, 00:27:54.403 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:54.403 } 
00:27:54.403 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:54.403 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:54.403 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:54.403 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:54.403 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:54.403 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:54.403 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.403 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.403 00:27:54.403 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.403 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:54.403 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.403 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.403 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.403 00:07:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:54.403 00:07:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.403 00:07:59 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:27:54.403 00:27:54.403 00:08:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.403 00:08:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:54.403 00:08:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:54.403 00:08:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.403 00:08:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.403 00:08:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.403 00:08:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:54.403 00:08:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:55.774 0 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 874899 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 874899 ']' 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 874899 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 
-- # '[' Linux = Linux ']' 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 874899 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 874899' 00:27:55.774 killing process with pid 874899 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 874899 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 874899 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@1608 -- # read -r file 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:27:55.774 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:27:55.774 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:55.774 [2024-07-25 00:07:59.405712] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:55.774 [2024-07-25 00:07:59.405809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874899 ] 00:27:55.774 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.774 [2024-07-25 00:07:59.465380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.774 [2024-07-25 00:07:59.556749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.774 [2024-07-25 00:08:00.052751] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 0fdc1e70-0985-42a6-81b5-2a73a4d0430d already exists 00:27:55.774 [2024-07-25 00:08:00.052807] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:0fdc1e70-0985-42a6-81b5-2a73a4d0430d alias for bdev NVMe1n1 00:27:55.774 [2024-07-25 00:08:00.052825] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:55.774 Running I/O for 1 seconds... 
00:27:55.774 00:27:55.774 Latency(us) 00:27:55.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.774 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:55.774 NVMe0n1 : 1.01 16991.57 66.37 0.00 0.00 7500.24 7039.05 17185.00 00:27:55.774 =================================================================================================================== 00:27:55.775 Total : 16991.57 66.37 0.00 0.00 7500.24 7039.05 17185.00 00:27:55.775 Received shutdown signal, test time was about 1.000000 seconds 00:27:55.775 00:27:55.775 Latency(us) 00:27:55.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.775 =================================================================================================================== 00:27:55.775 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:55.775 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:55.775 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:55.775 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:27:55.775 00:08:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:55.775 00:08:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:55.775 00:08:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:27:55.775 00:08:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:55.775 00:08:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:27:55.775 00:08:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:55.775 00:08:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:56.033 rmmod nvme_tcp 00:27:56.033 rmmod nvme_fabrics 00:27:56.033 rmmod nvme_keyring 
00:27:56.033 00:08:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:56.033 00:08:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:27:56.033 00:08:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:27:56.033 00:08:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 874751 ']' 00:27:56.033 00:08:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 874751 00:27:56.033 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 874751 ']' 00:27:56.033 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 874751 00:27:56.033 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:27:56.033 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:56.033 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 874751 00:27:56.033 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:56.033 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:56.033 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 874751' 00:27:56.033 killing process with pid 874751 00:27:56.033 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 874751 00:27:56.033 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 874751 00:27:56.291 00:08:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:56.291 00:08:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:56.291 00:08:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:56.291 00:08:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:56.291 00:08:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:56.292 00:08:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.292 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:56.292 00:08:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.193 00:08:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:58.193 00:27:58.193 real 0m7.279s 00:27:58.193 user 0m10.955s 00:27:58.193 sys 0m2.335s 00:27:58.193 00:08:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:58.193 00:08:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.193 ************************************ 00:27:58.193 END TEST nvmf_multicontroller 00:27:58.193 ************************************ 00:27:58.193 00:08:03 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:58.193 00:08:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:58.193 00:08:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:58.193 00:08:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:58.193 ************************************ 00:27:58.193 START TEST nvmf_aer 00:27:58.193 ************************************ 00:27:58.193 00:08:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:58.451 * Looking for test storage... 
00:27:58.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:58.451 00:08:03 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:58.451 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:58.451 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.451 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.451 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:58.451 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.451 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.451 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.451 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.451 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.451 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:27:58.452 00:08:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:00.351 00:08:05 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:00.351 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:00.351 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:00.351 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:00.352 Found net devices under 0000:09:00.0: cvl_0_0 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:00.352 Found net devices under 0000:09:00.1: cvl_0_1 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:00.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:28:00.352 00:28:00.352 --- 10.0.0.2 ping statistics --- 00:28:00.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.352 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:00.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:28:00.352 00:28:00.352 --- 10.0.0.1 ping statistics --- 00:28:00.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.352 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=876986 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 876986 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 876986 ']' 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:00.352 00:08:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:00.352 [2024-07-25 00:08:06.027031] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:00.352 [2024-07-25 00:08:06.027121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.352 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.611 [2024-07-25 00:08:06.092988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:00.611 [2024-07-25 00:08:06.179864] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:00.611 [2024-07-25 00:08:06.179939] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.611 [2024-07-25 00:08:06.179968] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.611 [2024-07-25 00:08:06.179979] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.612 [2024-07-25 00:08:06.179989] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:00.612 [2024-07-25 00:08:06.180042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.612 [2024-07-25 00:08:06.180133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:00.612 [2024-07-25 00:08:06.180166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:00.612 [2024-07-25 00:08:06.180168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.612 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:00.612 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:28:00.612 00:08:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:00.612 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:00.612 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:00.870 [2024-07-25 00:08:06.341987] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.870 00:08:06 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:00.870 Malloc0 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:00.870 [2024-07-25 00:08:06.396257] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:00.870 [ 00:28:00.870 { 00:28:00.870 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:00.870 "subtype": "Discovery", 00:28:00.870 "listen_addresses": [], 00:28:00.870 "allow_any_host": true, 00:28:00.870 "hosts": [] 00:28:00.870 }, 00:28:00.870 { 00:28:00.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:00.870 "subtype": "NVMe", 00:28:00.870 "listen_addresses": [ 00:28:00.870 { 00:28:00.870 "trtype": "TCP", 00:28:00.870 "adrfam": "IPv4", 00:28:00.870 "traddr": "10.0.0.2", 00:28:00.870 "trsvcid": "4420" 00:28:00.870 } 00:28:00.870 ], 00:28:00.870 "allow_any_host": true, 00:28:00.870 "hosts": [], 00:28:00.870 "serial_number": "SPDK00000000000001", 00:28:00.870 "model_number": "SPDK bdev Controller", 00:28:00.870 "max_namespaces": 2, 00:28:00.870 "min_cntlid": 1, 00:28:00.870 "max_cntlid": 65519, 00:28:00.870 "namespaces": [ 00:28:00.870 { 00:28:00.870 "nsid": 1, 00:28:00.870 "bdev_name": "Malloc0", 00:28:00.870 "name": "Malloc0", 00:28:00.870 "nguid": "AD37B568ECD142F780652E0E210F3E45", 00:28:00.870 "uuid": "ad37b568-ecd1-42f7-8065-2e0e210f3e45" 00:28:00.870 } 00:28:00.870 ] 00:28:00.870 } 00:28:00.870 ] 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=877130 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:00.870 00:08:06 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:00.870 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:28:00.870 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:01.129 Malloc1 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:01.129 Asynchronous Event Request test 00:28:01.129 Attaching to 10.0.0.2 00:28:01.129 Attached to 10.0.0.2 00:28:01.129 Registering asynchronous event callbacks... 00:28:01.129 Starting namespace attribute notice tests for all controllers... 00:28:01.129 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:01.129 aer_cb - Changed Namespace 00:28:01.129 Cleaning up... 
00:28:01.129 [ 00:28:01.129 { 00:28:01.129 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:01.129 "subtype": "Discovery", 00:28:01.129 "listen_addresses": [], 00:28:01.129 "allow_any_host": true, 00:28:01.129 "hosts": [] 00:28:01.129 }, 00:28:01.129 { 00:28:01.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:01.129 "subtype": "NVMe", 00:28:01.129 "listen_addresses": [ 00:28:01.129 { 00:28:01.129 "trtype": "TCP", 00:28:01.129 "adrfam": "IPv4", 00:28:01.129 "traddr": "10.0.0.2", 00:28:01.129 "trsvcid": "4420" 00:28:01.129 } 00:28:01.129 ], 00:28:01.129 "allow_any_host": true, 00:28:01.129 "hosts": [], 00:28:01.129 "serial_number": "SPDK00000000000001", 00:28:01.129 "model_number": "SPDK bdev Controller", 00:28:01.129 "max_namespaces": 2, 00:28:01.129 "min_cntlid": 1, 00:28:01.129 "max_cntlid": 65519, 00:28:01.129 "namespaces": [ 00:28:01.129 { 00:28:01.129 "nsid": 1, 00:28:01.129 "bdev_name": "Malloc0", 00:28:01.129 "name": "Malloc0", 00:28:01.129 "nguid": "AD37B568ECD142F780652E0E210F3E45", 00:28:01.129 "uuid": "ad37b568-ecd1-42f7-8065-2e0e210f3e45" 00:28:01.129 }, 00:28:01.129 { 00:28:01.129 "nsid": 2, 00:28:01.129 "bdev_name": "Malloc1", 00:28:01.129 "name": "Malloc1", 00:28:01.129 "nguid": "135801F4ACD04C608DB4579BCA94FF50", 00:28:01.129 "uuid": "135801f4-acd0-4c60-8db4-579bca94ff50" 00:28:01.129 } 00:28:01.129 ] 00:28:01.129 } 00:28:01.129 ] 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 877130 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 
00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.129 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:01.387 rmmod nvme_tcp 00:28:01.387 rmmod nvme_fabrics 00:28:01.387 rmmod nvme_keyring 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 876986 ']' 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 876986 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 876986 ']' 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 
876986 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 876986 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 876986' 00:28:01.387 killing process with pid 876986 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 876986 00:28:01.387 00:08:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 876986 00:28:01.645 00:08:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:01.645 00:08:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:01.645 00:08:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:01.645 00:08:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:01.645 00:08:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:01.645 00:08:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.645 00:08:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.645 00:08:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.551 00:08:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:03.551 00:28:03.551 real 0m5.310s 00:28:03.551 user 0m4.564s 00:28:03.551 sys 0m1.834s 00:28:03.551 00:08:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:03.551 00:08:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:03.551 
************************************ 00:28:03.551 END TEST nvmf_aer 00:28:03.551 ************************************ 00:28:03.551 00:08:09 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:03.551 00:08:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:03.551 00:08:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:03.551 00:08:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:03.810 ************************************ 00:28:03.810 START TEST nvmf_async_init 00:28:03.810 ************************************ 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:03.810 * Looking for test storage... 00:28:03.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- 
# nguid=0819c973909249e296203e01cbd7f262 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:03.810 00:08:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:05.711 
00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:05.711 00:08:11 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:05.711 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:05.711 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.711 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.712 00:08:11 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:05.712 Found net devices under 0000:09:00.0: cvl_0_0 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.712 00:08:11 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:05.712 Found net devices under 0000:09:00.1: cvl_0_1 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:05.712 
00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:05.712 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:05.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:05.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:28:05.969 00:28:05.969 --- 10.0.0.2 ping statistics --- 00:28:05.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.969 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:05.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:05.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:28:05.969 00:28:05.969 --- 10.0.0.1 ping statistics --- 00:28:05.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.969 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=879067 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 879067 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@827 -- # '[' -z 879067 ']' 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:05.969 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:05.969 [2024-07-25 00:08:11.532349] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:05.969 [2024-07-25 00:08:11.532432] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.969 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.969 [2024-07-25 00:08:11.598828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.227 [2024-07-25 00:08:11.690485] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:06.227 [2024-07-25 00:08:11.690538] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:06.227 [2024-07-25 00:08:11.690563] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:06.227 [2024-07-25 00:08:11.690575] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:06.227 [2024-07-25 00:08:11.690587] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:06.227 [2024-07-25 00:08:11.690618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.227 [2024-07-25 00:08:11.834655] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.227 null0 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.227 
00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0819c973909249e296203e01cbd7f262 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.227 [2024-07-25 00:08:11.874919] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.227 00:08:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.484 nvme0n1 00:28:06.484 00:08:12 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.484 00:08:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:06.484 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.484 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.484 [ 00:28:06.484 { 00:28:06.484 "name": "nvme0n1", 00:28:06.484 "aliases": [ 00:28:06.484 "0819c973-9092-49e2-9620-3e01cbd7f262" 00:28:06.484 ], 00:28:06.484 "product_name": "NVMe disk", 00:28:06.484 "block_size": 512, 00:28:06.484 "num_blocks": 2097152, 00:28:06.484 "uuid": "0819c973-9092-49e2-9620-3e01cbd7f262", 00:28:06.484 "assigned_rate_limits": { 00:28:06.484 "rw_ios_per_sec": 0, 00:28:06.484 "rw_mbytes_per_sec": 0, 00:28:06.484 "r_mbytes_per_sec": 0, 00:28:06.484 "w_mbytes_per_sec": 0 00:28:06.484 }, 00:28:06.484 "claimed": false, 00:28:06.484 "zoned": false, 00:28:06.484 "supported_io_types": { 00:28:06.484 "read": true, 00:28:06.484 "write": true, 00:28:06.484 "unmap": false, 00:28:06.484 "write_zeroes": true, 00:28:06.484 "flush": true, 00:28:06.484 "reset": true, 00:28:06.484 "compare": true, 00:28:06.484 "compare_and_write": true, 00:28:06.484 "abort": true, 00:28:06.484 "nvme_admin": true, 00:28:06.484 "nvme_io": true 00:28:06.484 }, 00:28:06.484 "memory_domains": [ 00:28:06.484 { 00:28:06.484 "dma_device_id": "system", 00:28:06.484 "dma_device_type": 1 00:28:06.484 } 00:28:06.484 ], 00:28:06.484 "driver_specific": { 00:28:06.484 "nvme": [ 00:28:06.484 { 00:28:06.484 "trid": { 00:28:06.484 "trtype": "TCP", 00:28:06.484 "adrfam": "IPv4", 00:28:06.484 "traddr": "10.0.0.2", 00:28:06.484 "trsvcid": "4420", 00:28:06.484 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:06.484 }, 00:28:06.484 "ctrlr_data": { 00:28:06.484 "cntlid": 1, 00:28:06.484 "vendor_id": "0x8086", 00:28:06.484 "model_number": "SPDK bdev Controller", 00:28:06.484 "serial_number": "00000000000000000000", 
00:28:06.484 "firmware_revision": "24.05.1", 00:28:06.484 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:06.484 "oacs": { 00:28:06.484 "security": 0, 00:28:06.484 "format": 0, 00:28:06.484 "firmware": 0, 00:28:06.484 "ns_manage": 0 00:28:06.484 }, 00:28:06.484 "multi_ctrlr": true, 00:28:06.484 "ana_reporting": false 00:28:06.484 }, 00:28:06.484 "vs": { 00:28:06.484 "nvme_version": "1.3" 00:28:06.484 }, 00:28:06.484 "ns_data": { 00:28:06.484 "id": 1, 00:28:06.484 "can_share": true 00:28:06.484 } 00:28:06.484 } 00:28:06.484 ], 00:28:06.484 "mp_policy": "active_passive" 00:28:06.484 } 00:28:06.484 } 00:28:06.484 ] 00:28:06.484 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.484 00:08:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:06.484 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.484 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.484 [2024-07-25 00:08:12.131468] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:06.484 [2024-07-25 00:08:12.131569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1812b90 (9): Bad file descriptor 00:28:06.742 [2024-07-25 00:08:12.263251] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.742 [ 00:28:06.742 { 00:28:06.742 "name": "nvme0n1", 00:28:06.742 "aliases": [ 00:28:06.742 "0819c973-9092-49e2-9620-3e01cbd7f262" 00:28:06.742 ], 00:28:06.742 "product_name": "NVMe disk", 00:28:06.742 "block_size": 512, 00:28:06.742 "num_blocks": 2097152, 00:28:06.742 "uuid": "0819c973-9092-49e2-9620-3e01cbd7f262", 00:28:06.742 "assigned_rate_limits": { 00:28:06.742 "rw_ios_per_sec": 0, 00:28:06.742 "rw_mbytes_per_sec": 0, 00:28:06.742 "r_mbytes_per_sec": 0, 00:28:06.742 "w_mbytes_per_sec": 0 00:28:06.742 }, 00:28:06.742 "claimed": false, 00:28:06.742 "zoned": false, 00:28:06.742 "supported_io_types": { 00:28:06.742 "read": true, 00:28:06.742 "write": true, 00:28:06.742 "unmap": false, 00:28:06.742 "write_zeroes": true, 00:28:06.742 "flush": true, 00:28:06.742 "reset": true, 00:28:06.742 "compare": true, 00:28:06.742 "compare_and_write": true, 00:28:06.742 "abort": true, 00:28:06.742 "nvme_admin": true, 00:28:06.742 "nvme_io": true 00:28:06.742 }, 00:28:06.742 "memory_domains": [ 00:28:06.742 { 00:28:06.742 "dma_device_id": "system", 00:28:06.742 "dma_device_type": 1 00:28:06.742 } 00:28:06.742 ], 00:28:06.742 "driver_specific": { 00:28:06.742 "nvme": [ 00:28:06.742 { 00:28:06.742 "trid": { 00:28:06.742 "trtype": "TCP", 00:28:06.742 "adrfam": "IPv4", 00:28:06.742 "traddr": "10.0.0.2", 00:28:06.742 "trsvcid": "4420", 00:28:06.742 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:06.742 }, 00:28:06.742 "ctrlr_data": { 00:28:06.742 "cntlid": 2, 00:28:06.742 "vendor_id": "0x8086", 00:28:06.742 "model_number": "SPDK bdev Controller", 00:28:06.742 "serial_number": 
"00000000000000000000", 00:28:06.742 "firmware_revision": "24.05.1", 00:28:06.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:06.742 "oacs": { 00:28:06.742 "security": 0, 00:28:06.742 "format": 0, 00:28:06.742 "firmware": 0, 00:28:06.742 "ns_manage": 0 00:28:06.742 }, 00:28:06.742 "multi_ctrlr": true, 00:28:06.742 "ana_reporting": false 00:28:06.742 }, 00:28:06.742 "vs": { 00:28:06.742 "nvme_version": "1.3" 00:28:06.742 }, 00:28:06.742 "ns_data": { 00:28:06.742 "id": 1, 00:28:06.742 "can_share": true 00:28:06.742 } 00:28:06.742 } 00:28:06.742 ], 00:28:06.742 "mp_policy": "active_passive" 00:28:06.742 } 00:28:06.742 } 00:28:06.742 ] 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.5tPviUzky2 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.5tPviUzky2 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.742 [2024-07-25 00:08:12.316088] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:06.742 [2024-07-25 00:08:12.316231] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5tPviUzky2 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.742 [2024-07-25 00:08:12.324115] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5tPviUzky2 00:28:06.742 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.743 [2024-07-25 00:08:12.332125] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:28:06.743 [2024-07-25 00:08:12.332193] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:06.743 nvme0n1 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.743 [ 00:28:06.743 { 00:28:06.743 "name": "nvme0n1", 00:28:06.743 "aliases": [ 00:28:06.743 "0819c973-9092-49e2-9620-3e01cbd7f262" 00:28:06.743 ], 00:28:06.743 "product_name": "NVMe disk", 00:28:06.743 "block_size": 512, 00:28:06.743 "num_blocks": 2097152, 00:28:06.743 "uuid": "0819c973-9092-49e2-9620-3e01cbd7f262", 00:28:06.743 "assigned_rate_limits": { 00:28:06.743 "rw_ios_per_sec": 0, 00:28:06.743 "rw_mbytes_per_sec": 0, 00:28:06.743 "r_mbytes_per_sec": 0, 00:28:06.743 "w_mbytes_per_sec": 0 00:28:06.743 }, 00:28:06.743 "claimed": false, 00:28:06.743 "zoned": false, 00:28:06.743 "supported_io_types": { 00:28:06.743 "read": true, 00:28:06.743 "write": true, 00:28:06.743 "unmap": false, 00:28:06.743 "write_zeroes": true, 00:28:06.743 "flush": true, 00:28:06.743 "reset": true, 00:28:06.743 "compare": true, 00:28:06.743 "compare_and_write": true, 00:28:06.743 "abort": true, 00:28:06.743 "nvme_admin": true, 00:28:06.743 "nvme_io": true 00:28:06.743 }, 00:28:06.743 "memory_domains": [ 00:28:06.743 { 00:28:06.743 "dma_device_id": "system", 00:28:06.743 "dma_device_type": 1 00:28:06.743 } 00:28:06.743 ], 00:28:06.743 "driver_specific": { 00:28:06.743 "nvme": [ 00:28:06.743 { 00:28:06.743 "trid": { 00:28:06.743 "trtype": "TCP", 00:28:06.743 "adrfam": "IPv4", 00:28:06.743 "traddr": "10.0.0.2", 00:28:06.743 "trsvcid": "4421", 00:28:06.743 "subnqn": 
"nqn.2016-06.io.spdk:cnode0" 00:28:06.743 }, 00:28:06.743 "ctrlr_data": { 00:28:06.743 "cntlid": 3, 00:28:06.743 "vendor_id": "0x8086", 00:28:06.743 "model_number": "SPDK bdev Controller", 00:28:06.743 "serial_number": "00000000000000000000", 00:28:06.743 "firmware_revision": "24.05.1", 00:28:06.743 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:06.743 "oacs": { 00:28:06.743 "security": 0, 00:28:06.743 "format": 0, 00:28:06.743 "firmware": 0, 00:28:06.743 "ns_manage": 0 00:28:06.743 }, 00:28:06.743 "multi_ctrlr": true, 00:28:06.743 "ana_reporting": false 00:28:06.743 }, 00:28:06.743 "vs": { 00:28:06.743 "nvme_version": "1.3" 00:28:06.743 }, 00:28:06.743 "ns_data": { 00:28:06.743 "id": 1, 00:28:06.743 "can_share": true 00:28:06.743 } 00:28:06.743 } 00:28:06.743 ], 00:28:06.743 "mp_policy": "active_passive" 00:28:06.743 } 00:28:06.743 } 00:28:06.743 ] 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.5tPviUzky2 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set 
+e 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:06.743 00:08:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:06.743 rmmod nvme_tcp 00:28:06.743 rmmod nvme_fabrics 00:28:07.002 rmmod nvme_keyring 00:28:07.002 00:08:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:07.002 00:08:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:07.002 00:08:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:07.002 00:08:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 879067 ']' 00:28:07.002 00:08:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 879067 00:28:07.002 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 879067 ']' 00:28:07.002 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 879067 00:28:07.002 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:07.002 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:07.002 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 879067 00:28:07.002 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:07.002 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:07.002 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 879067' 00:28:07.002 killing process with pid 879067 00:28:07.002 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 879067 00:28:07.002 [2024-07-25 00:08:12.517218] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:07.002 [2024-07-25 00:08:12.517253] app.c:1024:log_deprecation_hits: 
*WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:07.002 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 879067 00:28:07.260 00:08:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:07.260 00:08:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:07.260 00:08:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:07.260 00:08:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:07.260 00:08:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:07.260 00:08:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.260 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:07.260 00:08:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.159 00:08:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:09.159 00:28:09.159 real 0m5.510s 00:28:09.159 user 0m2.041s 00:28:09.159 sys 0m1.862s 00:28:09.159 00:08:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:09.159 00:08:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.159 ************************************ 00:28:09.159 END TEST nvmf_async_init 00:28:09.159 ************************************ 00:28:09.159 00:08:14 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:09.159 00:08:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:09.159 00:08:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:09.159 00:08:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:09.159 ************************************ 00:28:09.159 
START TEST dma 00:28:09.159 ************************************ 00:28:09.159 00:08:14 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:09.418 * Looking for test storage... 00:28:09.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:09.418 00:08:14 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:09.418 00:08:14 nvmf_tcp.dma -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.418 00:08:14 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.418 00:08:14 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.418 00:08:14 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.418 00:08:14 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.418 00:08:14 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.418 00:08:14 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.418 00:08:14 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:09.418 00:08:14 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:09.418 00:08:14 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:09.418 00:08:14 nvmf_tcp.dma -- 
host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:09.418 00:08:14 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:09.418 00:28:09.418 real 0m0.067s 00:28:09.418 user 0m0.030s 00:28:09.418 sys 0m0.042s 00:28:09.418 00:08:14 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:09.418 00:08:14 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:09.418 ************************************ 00:28:09.418 END TEST dma 00:28:09.418 ************************************ 00:28:09.418 00:08:14 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:09.418 00:08:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:09.418 00:08:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:09.418 00:08:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:09.418 ************************************ 00:28:09.418 START TEST nvmf_identify 00:28:09.418 ************************************ 00:28:09.419 00:08:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:09.419 * Looking for test storage... 
00:28:09.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:09.419 00:08:14 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:09.419 00:08:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:09.419 00:08:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.419 00:08:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.419 00:08:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.419 00:08:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:09.419 00:08:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:09.419 00:08:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:09.419 00:08:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.419 00:08:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:09.419 00:08:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.419 00:08:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:09.419 00:08:15 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:09.419 00:08:15 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:09.419 00:08:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:11.317 00:08:16 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.317 
00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:11.317 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:11.317 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:11.317 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:11.318 Found net devices under 0000:09:00.0: cvl_0_0 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:11.318 Found net devices under 0000:09:00.1: cvl_0_1 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:11.318 00:08:16 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:11.318 00:08:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:11.318 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:11.318 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:11.318 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:11.318 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:11.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:11.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:28:11.576 00:28:11.576 --- 10.0.0.2 ping statistics --- 00:28:11.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.576 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:11.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:11.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:28:11.576 00:28:11.576 --- 10.0.0.1 ping statistics --- 00:28:11.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.576 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=881181 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 881181 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify 
-- common/autotest_common.sh@827 -- # '[' -z 881181 ']' 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:11.576 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:11.576 [2024-07-25 00:08:17.146422] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:11.576 [2024-07-25 00:08:17.146494] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.576 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.576 [2024-07-25 00:08:17.217299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:11.834 [2024-07-25 00:08:17.309997] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.834 [2024-07-25 00:08:17.310042] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.834 [2024-07-25 00:08:17.310070] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.834 [2024-07-25 00:08:17.310081] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:11.834 [2024-07-25 00:08:17.310099] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
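For reference, the `nvmf_tcp_init` plumbing traced above (move one port of the cvl_0_* pair into a namespace, address both ends, open TCP/4420, ping both directions) can be sketched as a dry run. `run` only echoes each command, since executing them for real needs root and the cvl_0_0/cvl_0_1 net devices found under 0000:09:00.*:

```shell
#!/bin/sh
# Dry-run sketch of the nvmf_tcp_init steps from nvmf/common.sh above.
# run() echoes instead of executing: the real commands require root
# privileges and the cvl_0_0 / cvl_0_1 devices present on this rig.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                            # target-side network namespace
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"           # target port lives in the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
```

The namespace is what lets a single host act as both target (10.0.0.2 inside `$NS`) and initiator (10.0.0.1 outside it) over a looped-back cable pair.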
00:28:11.834 [2024-07-25 00:08:17.310176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.834 [2024-07-25 00:08:17.310234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:11.834 [2024-07-25 00:08:17.310311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:11.834 [2024-07-25 00:08:17.310314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:11.834 [2024-07-25 00:08:17.446937] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:11.834 Malloc0 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:11.834 
00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:11.834 [2024-07-25 00:08:17.523538] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.834 00:08:17 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:11.834 [ 00:28:11.834 { 00:28:11.834 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:11.834 "subtype": "Discovery", 00:28:11.834 "listen_addresses": [ 00:28:11.834 { 00:28:11.834 "trtype": "TCP", 00:28:11.834 "adrfam": "IPv4", 00:28:11.834 "traddr": "10.0.0.2", 00:28:11.834 "trsvcid": "4420" 00:28:11.834 } 00:28:11.834 ], 00:28:11.834 "allow_any_host": true, 00:28:11.834 "hosts": [] 00:28:11.834 }, 00:28:11.834 { 00:28:11.834 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:11.834 "subtype": "NVMe", 00:28:11.834 "listen_addresses": [ 00:28:11.834 { 00:28:11.834 "trtype": "TCP", 00:28:11.834 "adrfam": "IPv4", 00:28:11.834 "traddr": "10.0.0.2", 00:28:11.834 "trsvcid": "4420" 00:28:11.834 } 00:28:11.834 ], 00:28:11.834 "allow_any_host": true, 00:28:11.834 "hosts": [], 00:28:11.834 "serial_number": "SPDK00000000000001", 00:28:11.834 "model_number": "SPDK bdev Controller", 00:28:11.834 "max_namespaces": 32, 00:28:11.834 "min_cntlid": 1, 00:28:11.834 "max_cntlid": 65519, 00:28:11.834 "namespaces": [ 00:28:11.834 { 00:28:11.834 "nsid": 1, 00:28:11.834 "bdev_name": "Malloc0", 00:28:11.834 "name": "Malloc0", 00:28:11.834 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:11.834 "eui64": "ABCDEF0123456789", 00:28:11.834 "uuid": "d414a001-e1e5-4850-b062-41143b1d78ee" 00:28:11.834 } 00:28:11.834 ] 00:28:11.834 } 00:28:11.834 ] 00:28:11.834 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.835 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:12.095 [2024-07-25 00:08:17.565100] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
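The `rpc_cmd` calls above reach `nvmf_tgt` as JSON-RPC 2.0 requests on /var/tmp/spdk.sock. A minimal sketch of the request bodies follows; the method names come straight from the trace, but the parameter spellings (e.g. `num_blocks`, `listen_address`) are assumptions from common SPDK RPC usage and should be checked against `rpc.py` for the SPDK build actually under test:

```shell
#!/bin/sh
# Sketch of the host/identify.sh rpc_cmd sequence as raw JSON-RPC 2.0 requests.
# Parameter names are assumptions -- verify with `rpc.py <method> --help`.
rpc() { printf '{"jsonrpc":"2.0","id":%s,"method":"%s","params":%s}\n' "$1" "$2" "$3"; }

rpc 1 nvmf_create_transport '{"trtype":"TCP"}'
rpc 2 bdev_malloc_create '{"num_blocks":131072,"block_size":512,"name":"Malloc0"}'
rpc 3 nvmf_create_subsystem '{"nqn":"nqn.2016-06.io.spdk:cnode1","allow_any_host":true,"serial_number":"SPDK00000000000001"}'
rpc 4 nvmf_subsystem_add_ns '{"nqn":"nqn.2016-06.io.spdk:cnode1","namespace":{"bdev_name":"Malloc0","nguid":"ABCDEF0123456789ABCDEF0123456789","eui64":"ABCDEF0123456789"}}'
rpc 5 nvmf_subsystem_add_listener '{"nqn":"nqn.2016-06.io.spdk:cnode1","listen_address":{"trtype":"tcp","adrfam":"ipv4","traddr":"10.0.0.2","trsvcid":"4420"}}'
```

The `nvmf_get_subsystems` output above is the target's view after exactly this sequence: one discovery subsystem plus cnode1 with Malloc0 as nsid 1.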
00:28:12.095 [2024-07-25 00:08:17.565163] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881259 ] 00:28:12.095 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.095 [2024-07-25 00:08:17.602603] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:12.096 [2024-07-25 00:08:17.602676] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:12.096 [2024-07-25 00:08:17.602686] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:12.096 [2024-07-25 00:08:17.602701] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:12.096 [2024-07-25 00:08:17.602715] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:12.096 [2024-07-25 00:08:17.602974] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:12.096 [2024-07-25 00:08:17.603029] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1245120 0 00:28:12.096 [2024-07-25 00:08:17.613124] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:12.096 [2024-07-25 00:08:17.613147] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:12.096 [2024-07-25 00:08:17.613167] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:12.096 [2024-07-25 00:08:17.613173] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:12.096 [2024-07-25 00:08:17.613226] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.613239] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:28:12.096 [2024-07-25 00:08:17.613247] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1245120) 00:28:12.096 [2024-07-25 00:08:17.613267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:12.096 [2024-07-25 00:08:17.613294] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e1f0, cid 0, qid 0 00:28:12.096 [2024-07-25 00:08:17.621120] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.096 [2024-07-25 00:08:17.621139] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.096 [2024-07-25 00:08:17.621146] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.621155] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e1f0) on tqpair=0x1245120 00:28:12.096 [2024-07-25 00:08:17.621173] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:12.096 [2024-07-25 00:08:17.621184] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:12.096 [2024-07-25 00:08:17.621194] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:12.096 [2024-07-25 00:08:17.621220] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.621229] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.621236] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1245120) 00:28:12.096 [2024-07-25 00:08:17.621247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.096 [2024-07-25 00:08:17.621272] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x129e1f0, cid 0, qid 0 00:28:12.096 [2024-07-25 00:08:17.621415] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.096 [2024-07-25 00:08:17.621428] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.096 [2024-07-25 00:08:17.621434] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.621441] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e1f0) on tqpair=0x1245120 00:28:12.096 [2024-07-25 00:08:17.621456] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:12.096 [2024-07-25 00:08:17.621471] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:12.096 [2024-07-25 00:08:17.621483] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.621491] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.621501] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1245120) 00:28:12.096 [2024-07-25 00:08:17.621512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.096 [2024-07-25 00:08:17.621533] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e1f0, cid 0, qid 0 00:28:12.096 [2024-07-25 00:08:17.621678] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.096 [2024-07-25 00:08:17.621693] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.096 [2024-07-25 00:08:17.621700] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.621707] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e1f0) on tqpair=0x1245120 00:28:12.096 [2024-07-25 
00:08:17.621718] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:12.096 [2024-07-25 00:08:17.621732] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:12.096 [2024-07-25 00:08:17.621745] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.621752] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.621759] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1245120) 00:28:12.096 [2024-07-25 00:08:17.621769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.096 [2024-07-25 00:08:17.621790] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e1f0, cid 0, qid 0 00:28:12.096 [2024-07-25 00:08:17.621910] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.096 [2024-07-25 00:08:17.621922] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.096 [2024-07-25 00:08:17.621928] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.621935] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e1f0) on tqpair=0x1245120 00:28:12.096 [2024-07-25 00:08:17.621946] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:12.096 [2024-07-25 00:08:17.621962] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.621971] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.621977] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1245120) 00:28:12.096 [2024-07-25 00:08:17.621988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.096 [2024-07-25 00:08:17.622008] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e1f0, cid 0, qid 0 00:28:12.096 [2024-07-25 00:08:17.622130] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.096 [2024-07-25 00:08:17.622143] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.096 [2024-07-25 00:08:17.622150] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.622156] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e1f0) on tqpair=0x1245120 00:28:12.096 [2024-07-25 00:08:17.622167] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:12.096 [2024-07-25 00:08:17.622175] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:12.096 [2024-07-25 00:08:17.622189] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:12.096 [2024-07-25 00:08:17.622299] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:12.096 [2024-07-25 00:08:17.622308] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:12.096 [2024-07-25 00:08:17.622328] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.622336] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.622342] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1245120) 00:28:12.096 [2024-07-25 00:08:17.622353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.096 [2024-07-25 00:08:17.622375] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e1f0, cid 0, qid 0 00:28:12.096 [2024-07-25 00:08:17.622511] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.096 [2024-07-25 00:08:17.622527] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.096 [2024-07-25 00:08:17.622534] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.622540] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e1f0) on tqpair=0x1245120 00:28:12.096 [2024-07-25 00:08:17.622551] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:12.096 [2024-07-25 00:08:17.622567] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.622577] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.622583] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1245120) 00:28:12.096 [2024-07-25 00:08:17.622594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.096 [2024-07-25 00:08:17.622615] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e1f0, cid 0, qid 0 00:28:12.096 [2024-07-25 00:08:17.622734] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.096 [2024-07-25 00:08:17.622746] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.096 [2024-07-25 00:08:17.622753] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.622760] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e1f0) on tqpair=0x1245120 00:28:12.096 [2024-07-25 00:08:17.622769] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:12.096 [2024-07-25 00:08:17.622778] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:12.096 [2024-07-25 00:08:17.622791] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:12.096 [2024-07-25 00:08:17.622805] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:12.096 [2024-07-25 00:08:17.622824] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.096 [2024-07-25 00:08:17.622832] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1245120) 00:28:12.096 [2024-07-25 00:08:17.622843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.097 [2024-07-25 00:08:17.622863] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e1f0, cid 0, qid 0 00:28:12.097 [2024-07-25 00:08:17.623031] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.097 [2024-07-25 00:08:17.623047] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.097 [2024-07-25 00:08:17.623053] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623060] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1245120): datao=0, datal=4096, cccid=0 00:28:12.097 [2024-07-25 00:08:17.623068] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x129e1f0) on tqpair(0x1245120): expected_datao=0, payload_size=4096 00:28:12.097 [2024-07-25 00:08:17.623081] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623093] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623108] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623124] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.097 [2024-07-25 00:08:17.623133] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.097 [2024-07-25 00:08:17.623140] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623147] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e1f0) on tqpair=0x1245120 00:28:12.097 [2024-07-25 00:08:17.623165] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:12.097 [2024-07-25 00:08:17.623175] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:12.097 [2024-07-25 00:08:17.623183] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:12.097 [2024-07-25 00:08:17.623192] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:12.097 [2024-07-25 00:08:17.623200] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:12.097 [2024-07-25 00:08:17.623208] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:12.097 [2024-07-25 
00:08:17.623224] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:12.097 [2024-07-25 00:08:17.623237] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623244] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623251] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1245120) 00:28:12.097 [2024-07-25 00:08:17.623262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:12.097 [2024-07-25 00:08:17.623283] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e1f0, cid 0, qid 0 00:28:12.097 [2024-07-25 00:08:17.623421] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.097 [2024-07-25 00:08:17.623433] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.097 [2024-07-25 00:08:17.623440] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623446] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e1f0) on tqpair=0x1245120 00:28:12.097 [2024-07-25 00:08:17.623462] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623470] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623476] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1245120) 00:28:12.097 [2024-07-25 00:08:17.623486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.097 [2024-07-25 00:08:17.623496] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623503] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623509] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1245120) 00:28:12.097 [2024-07-25 00:08:17.623518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.097 [2024-07-25 00:08:17.623527] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623534] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623540] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1245120) 00:28:12.097 [2024-07-25 00:08:17.623553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.097 [2024-07-25 00:08:17.623563] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623570] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623576] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1245120) 00:28:12.097 [2024-07-25 00:08:17.623585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.097 [2024-07-25 00:08:17.623594] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:12.097 [2024-07-25 00:08:17.623613] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:12.097 [2024-07-25 00:08:17.623626] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623633] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1245120) 00:28:12.097 [2024-07-25 00:08:17.623643] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.097 [2024-07-25 00:08:17.623665] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e1f0, cid 0, qid 0 00:28:12.097 [2024-07-25 00:08:17.623676] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e350, cid 1, qid 0 00:28:12.097 [2024-07-25 00:08:17.623684] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e4b0, cid 2, qid 0 00:28:12.097 [2024-07-25 00:08:17.623691] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e610, cid 3, qid 0 00:28:12.097 [2024-07-25 00:08:17.623699] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e770, cid 4, qid 0 00:28:12.097 [2024-07-25 00:08:17.623854] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.097 [2024-07-25 00:08:17.623869] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.097 [2024-07-25 00:08:17.623876] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623883] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e770) on tqpair=0x1245120 00:28:12.097 [2024-07-25 00:08:17.623894] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:12.097 [2024-07-25 00:08:17.623903] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:12.097 [2024-07-25 00:08:17.623921] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.623930] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1245120) 00:28:12.097 [2024-07-25 00:08:17.623940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.097 [2024-07-25 00:08:17.623961] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e770, cid 4, qid 0 00:28:12.097 [2024-07-25 00:08:17.624094] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.097 [2024-07-25 00:08:17.624118] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.097 [2024-07-25 00:08:17.624126] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.624133] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1245120): datao=0, datal=4096, cccid=4 00:28:12.097 [2024-07-25 00:08:17.624140] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x129e770) on tqpair(0x1245120): expected_datao=0, payload_size=4096 00:28:12.097 [2024-07-25 00:08:17.624148] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.624170] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.624184] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.624244] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.097 [2024-07-25 00:08:17.624256] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.097 [2024-07-25 00:08:17.624262] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.624269] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e770) on tqpair=0x1245120 00:28:12.097 [2024-07-25 00:08:17.624289] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:12.097 [2024-07-25 00:08:17.624328] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.624339] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1245120) 00:28:12.097 [2024-07-25 00:08:17.624350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.097 [2024-07-25 00:08:17.624362] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.624369] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.624375] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1245120) 00:28:12.097 [2024-07-25 00:08:17.624385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.097 [2024-07-25 00:08:17.624412] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e770, cid 4, qid 0 00:28:12.097 [2024-07-25 00:08:17.624423] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e8d0, cid 5, qid 0 00:28:12.097 [2024-07-25 00:08:17.624592] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.097 [2024-07-25 00:08:17.624606] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.097 [2024-07-25 00:08:17.624613] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.624619] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1245120): datao=0, datal=1024, cccid=4 00:28:12.097 [2024-07-25 00:08:17.624627] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x129e770) on tqpair(0x1245120): expected_datao=0, payload_size=1024 00:28:12.097 [2024-07-25 00:08:17.624635] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.624645] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.624652] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:12.097 [2024-07-25 00:08:17.624660] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.098 [2024-07-25 00:08:17.624669] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.098 [2024-07-25 00:08:17.624676] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.098 [2024-07-25 00:08:17.624682] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e8d0) on tqpair=0x1245120 00:28:12.098 [2024-07-25 00:08:17.669116] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.098 [2024-07-25 00:08:17.669146] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.098 [2024-07-25 00:08:17.669153] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.098 [2024-07-25 00:08:17.669160] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e770) on tqpair=0x1245120 00:28:12.098 [2024-07-25 00:08:17.669185] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.098 [2024-07-25 00:08:17.669195] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1245120) 00:28:12.098 [2024-07-25 00:08:17.669207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.098 [2024-07-25 00:08:17.669238] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e770, cid 4, qid 0 00:28:12.098 [2024-07-25 00:08:17.669406] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.098 [2024-07-25 00:08:17.669422] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.098 [2024-07-25 00:08:17.669435] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
enter 00:28:12.098 [2024-07-25 00:08:17.669443] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1245120): datao=0, datal=3072, cccid=4 00:28:12.098 [2024-07-25 00:08:17.669451] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x129e770) on tqpair(0x1245120): expected_datao=0, payload_size=3072 00:28:12.098 [2024-07-25 00:08:17.669458] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.098 [2024-07-25 00:08:17.669469] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:12.098 [2024-07-25 00:08:17.669476] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:12.098 [2024-07-25 00:08:17.669548] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.098 [2024-07-25 00:08:17.669561] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.098 [2024-07-25 00:08:17.669567] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.098 [2024-07-25 00:08:17.669574] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e770) on tqpair=0x1245120 00:28:12.098 [2024-07-25 00:08:17.669590] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.098 [2024-07-25 00:08:17.669598] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1245120) 00:28:12.098 [2024-07-25 00:08:17.669609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.098 [2024-07-25 00:08:17.669637] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e770, cid 4, qid 0 00:28:12.098 [2024-07-25 00:08:17.669782] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.098 [2024-07-25 00:08:17.669794] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.098 [2024-07-25 00:08:17.669801] 
nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:12.098 [2024-07-25 00:08:17.669807] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1245120): datao=0, datal=8, cccid=4
00:28:12.098 [2024-07-25 00:08:17.669815] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x129e770) on tqpair(0x1245120): expected_datao=0, payload_size=8
00:28:12.098 [2024-07-25 00:08:17.669822] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.098 [2024-07-25 00:08:17.669832] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:12.098 [2024-07-25 00:08:17.669839] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:12.098 [2024-07-25 00:08:17.711251] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.098 [2024-07-25 00:08:17.711271] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.098 [2024-07-25 00:08:17.711278] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.098 [2024-07-25 00:08:17.711285] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e770) on tqpair=0x1245120
00:28:12.098 =====================================================
00:28:12.098 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:28:12.098 =====================================================
00:28:12.098 Controller Capabilities/Features
00:28:12.098 ================================
00:28:12.098 Vendor ID: 0000
00:28:12.098 Subsystem Vendor ID: 0000
00:28:12.098 Serial Number: ....................
00:28:12.098 Model Number: ........................................
00:28:12.098 Firmware Version: 24.05.1
00:28:12.098 Recommended Arb Burst: 0
00:28:12.098 IEEE OUI Identifier: 00 00 00
00:28:12.098 Multi-path I/O
00:28:12.098 May have multiple subsystem ports: No
00:28:12.098 May have multiple controllers: No
00:28:12.098 Associated with SR-IOV VF: No
00:28:12.098 Max Data Transfer Size: 131072
00:28:12.098 Max Number of Namespaces: 0
00:28:12.098 Max Number of I/O Queues: 1024
00:28:12.098 NVMe Specification Version (VS): 1.3
00:28:12.098 NVMe Specification Version (Identify): 1.3
00:28:12.098 Maximum Queue Entries: 128
00:28:12.098 Contiguous Queues Required: Yes
00:28:12.098 Arbitration Mechanisms Supported
00:28:12.098 Weighted Round Robin: Not Supported
00:28:12.098 Vendor Specific: Not Supported
00:28:12.098 Reset Timeout: 15000 ms
00:28:12.098 Doorbell Stride: 4 bytes
00:28:12.098 NVM Subsystem Reset: Not Supported
00:28:12.098 Command Sets Supported
00:28:12.098 NVM Command Set: Supported
00:28:12.098 Boot Partition: Not Supported
00:28:12.098 Memory Page Size Minimum: 4096 bytes
00:28:12.098 Memory Page Size Maximum: 4096 bytes
00:28:12.098 Persistent Memory Region: Not Supported
00:28:12.098 Optional Asynchronous Events Supported
00:28:12.098 Namespace Attribute Notices: Not Supported
00:28:12.098 Firmware Activation Notices: Not Supported
00:28:12.098 ANA Change Notices: Not Supported
00:28:12.098 PLE Aggregate Log Change Notices: Not Supported
00:28:12.098 LBA Status Info Alert Notices: Not Supported
00:28:12.098 EGE Aggregate Log Change Notices: Not Supported
00:28:12.098 Normal NVM Subsystem Shutdown event: Not Supported
00:28:12.098 Zone Descriptor Change Notices: Not Supported
00:28:12.098 Discovery Log Change Notices: Supported
00:28:12.098 Controller Attributes
00:28:12.098 128-bit Host Identifier: Not Supported
00:28:12.098 Non-Operational Permissive Mode: Not Supported
00:28:12.098 NVM Sets: Not Supported
00:28:12.098 Read Recovery Levels: Not Supported
00:28:12.098 Endurance Groups: Not Supported
00:28:12.098 Predictable Latency Mode: Not Supported
00:28:12.098 Traffic Based Keep Alive: Not Supported
00:28:12.098 Namespace Granularity: Not Supported
00:28:12.098 SQ Associations: Not Supported
00:28:12.098 UUID List: Not Supported
00:28:12.098 Multi-Domain Subsystem: Not Supported
00:28:12.098 Fixed Capacity Management: Not Supported
00:28:12.098 Variable Capacity Management: Not Supported
00:28:12.098 Delete Endurance Group: Not Supported
00:28:12.098 Delete NVM Set: Not Supported
00:28:12.098 Extended LBA Formats Supported: Not Supported
00:28:12.098 Flexible Data Placement Supported: Not Supported
00:28:12.098
00:28:12.098 Controller Memory Buffer Support
00:28:12.098 ================================
00:28:12.098 Supported: No
00:28:12.098
00:28:12.098 Persistent Memory Region Support
00:28:12.098 ================================
00:28:12.098 Supported: No
00:28:12.098
00:28:12.098 Admin Command Set Attributes
00:28:12.098 ============================
00:28:12.098 Security Send/Receive: Not Supported
00:28:12.098 Format NVM: Not Supported
00:28:12.098 Firmware Activate/Download: Not Supported
00:28:12.098 Namespace Management: Not Supported
00:28:12.098 Device Self-Test: Not Supported
00:28:12.098 Directives: Not Supported
00:28:12.098 NVMe-MI: Not Supported
00:28:12.098 Virtualization Management: Not Supported
00:28:12.098 Doorbell Buffer Config: Not Supported
00:28:12.098 Get LBA Status Capability: Not Supported
00:28:12.098 Command & Feature Lockdown Capability: Not Supported
00:28:12.098 Abort Command Limit: 1
00:28:12.098 Async Event Request Limit: 4
00:28:12.098 Number of Firmware Slots: N/A
00:28:12.098 Firmware Slot 1 Read-Only: N/A
00:28:12.098 Firmware Activation Without Reset: N/A
00:28:12.098 Multiple Update Detection Support: N/A
00:28:12.098 Firmware Update Granularity: No Information Provided
00:28:12.098 Per-Namespace SMART Log: No
00:28:12.098 Asymmetric Namespace Access Log Page: Not Supported
00:28:12.098 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:28:12.098 Command Effects Log Page: Not Supported
00:28:12.098 Get Log Page Extended Data: Supported
00:28:12.098 Telemetry Log Pages: Not Supported
00:28:12.098 Persistent Event Log Pages: Not Supported
00:28:12.098 Supported Log Pages Log Page: May Support
00:28:12.098 Commands Supported & Effects Log Page: Not Supported
00:28:12.098 Feature Identifiers & Effects Log Page: May Support
00:28:12.098 NVMe-MI Commands & Effects Log Page: May Support
00:28:12.098 Data Area 4 for Telemetry Log: Not Supported
00:28:12.098 Error Log Page Entries Supported: 128
00:28:12.098 Keep Alive: Not Supported
00:28:12.098
00:28:12.098 NVM Command Set Attributes
00:28:12.098 ==========================
00:28:12.098 Submission Queue Entry Size
00:28:12.098 Max: 1
00:28:12.098 Min: 1
00:28:12.098 Completion Queue Entry Size
00:28:12.099 Max: 1
00:28:12.099 Min: 1
00:28:12.099 Number of Namespaces: 0
00:28:12.099 Compare Command: Not Supported
00:28:12.099 Write Uncorrectable Command: Not Supported
00:28:12.099 Dataset Management Command: Not Supported
00:28:12.099 Write Zeroes Command: Not Supported
00:28:12.099 Set Features Save Field: Not Supported
00:28:12.099 Reservations: Not Supported
00:28:12.099 Timestamp: Not Supported
00:28:12.099 Copy: Not Supported
00:28:12.099 Volatile Write Cache: Not Present
00:28:12.099 Atomic Write Unit (Normal): 1
00:28:12.099 Atomic Write Unit (PFail): 1
00:28:12.099 Atomic Compare & Write Unit: 1
00:28:12.099 Fused Compare & Write: Supported
00:28:12.099 Scatter-Gather List
00:28:12.099 SGL Command Set: Supported
00:28:12.099 SGL Keyed: Supported
00:28:12.099 SGL Bit Bucket Descriptor: Not Supported
00:28:12.099 SGL Metadata Pointer: Not Supported
00:28:12.099 Oversized SGL: Not Supported
00:28:12.099 SGL Metadata Address: Not Supported
00:28:12.099 SGL Offset: Supported
00:28:12.099 Transport SGL Data Block: Not Supported
00:28:12.099 Replay Protected Memory Block: Not Supported
00:28:12.099
00:28:12.099 Firmware Slot Information
00:28:12.099 =========================
00:28:12.099 Active slot: 0
00:28:12.099
00:28:12.099
00:28:12.099 Error Log
00:28:12.099 =========
00:28:12.099
00:28:12.099 Active Namespaces
00:28:12.099 =================
00:28:12.099 Discovery Log Page
00:28:12.099 ==================
00:28:12.099 Generation Counter: 2
00:28:12.099 Number of Records: 2
00:28:12.099 Record Format: 0
00:28:12.099
00:28:12.099 Discovery Log Entry 0
00:28:12.099 ----------------------
00:28:12.099 Transport Type: 3 (TCP)
00:28:12.099 Address Family: 1 (IPv4)
00:28:12.099 Subsystem Type: 3 (Current Discovery Subsystem)
00:28:12.099 Entry Flags:
00:28:12.099 Duplicate Returned Information: 1
00:28:12.099 Explicit Persistent Connection Support for Discovery: 1
00:28:12.099 Transport Requirements:
00:28:12.099 Secure Channel: Not Required
00:28:12.099 Port ID: 0 (0x0000)
00:28:12.099 Controller ID: 65535 (0xffff)
00:28:12.099 Admin Max SQ Size: 128
00:28:12.099 Transport Service Identifier: 4420
00:28:12.099 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:28:12.099 Transport Address: 10.0.0.2
00:28:12.099 Discovery Log Entry 1
00:28:12.099 ----------------------
00:28:12.099 Transport Type: 3 (TCP)
00:28:12.099 Address Family: 1 (IPv4)
00:28:12.099 Subsystem Type: 2 (NVM Subsystem)
00:28:12.099 Entry Flags:
00:28:12.099 Duplicate Returned Information: 0
00:28:12.099 Explicit Persistent Connection Support for Discovery: 0
00:28:12.099 Transport Requirements:
00:28:12.099 Secure Channel: Not Required
00:28:12.099 Port ID: 0 (0x0000)
00:28:12.099 Controller ID: 65535 (0xffff)
00:28:12.099 Admin Max SQ Size: 128
00:28:12.099 Transport Service Identifier: 4420
00:28:12.099 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:28:12.099 Transport Address: 10.0.0.2
[2024-07-25 00:08:17.711409] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:28:12.099 [2024-07-25 00:08:17.711437]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.099 [2024-07-25 00:08:17.711450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.099 [2024-07-25 00:08:17.711460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.099 [2024-07-25 00:08:17.711469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.099 [2024-07-25 00:08:17.711487] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.099 [2024-07-25 00:08:17.711497] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.099 [2024-07-25 00:08:17.711503] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1245120) 00:28:12.099 [2024-07-25 00:08:17.711514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.099 [2024-07-25 00:08:17.711543] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e610, cid 3, qid 0 00:28:12.099 [2024-07-25 00:08:17.711669] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.099 [2024-07-25 00:08:17.711684] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.099 [2024-07-25 00:08:17.711691] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.099 [2024-07-25 00:08:17.711698] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e610) on tqpair=0x1245120 00:28:12.099 [2024-07-25 00:08:17.711711] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.099 [2024-07-25 00:08:17.711719] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.099 [2024-07-25 
00:08:17.711726] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1245120) 00:28:12.099 [2024-07-25 00:08:17.711736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.099 [2024-07-25 00:08:17.711763] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e610, cid 3, qid 0 00:28:12.099 [2024-07-25 00:08:17.711908] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.099 [2024-07-25 00:08:17.711920] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.099 [2024-07-25 00:08:17.711927] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.099 [2024-07-25 00:08:17.711933] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e610) on tqpair=0x1245120 00:28:12.099 [2024-07-25 00:08:17.711944] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:12.099 [2024-07-25 00:08:17.711952] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:12.099 [2024-07-25 00:08:17.711968] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.099 [2024-07-25 00:08:17.711977] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.099 [2024-07-25 00:08:17.711984] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1245120) 00:28:12.099 [2024-07-25 00:08:17.711994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.099 [2024-07-25 00:08:17.712015] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e610, cid 3, qid 0 00:28:12.099 [2024-07-25 00:08:17.712190] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.099 [2024-07-25 
00:08:17.712206] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.099 [2024-07-25 00:08:17.712213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.099 [2024-07-25 00:08:17.712219] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e610) on tqpair=0x1245120 00:28:12.099 [2024-07-25 00:08:17.712239] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.099 [2024-07-25 00:08:17.712249] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.099 [2024-07-25 00:08:17.712255] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1245120) 00:28:12.099 [2024-07-25 00:08:17.712266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.099 [2024-07-25 00:08:17.712287] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e610, cid 3, qid 0 00:28:12.099 [2024-07-25 00:08:17.712410] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.099 [2024-07-25 00:08:17.712422] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.099 [2024-07-25 00:08:17.712428] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.099 [2024-07-25 00:08:17.712435] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e610) on tqpair=0x1245120 00:28:12.099 [2024-07-25 00:08:17.712452] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.099 [2024-07-25 00:08:17.712462] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.099 [2024-07-25 00:08:17.712472] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1245120) 00:28:12.099 [2024-07-25 00:08:17.712483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.099 
[2024-07-25 00:08:17.712504] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e610, cid 3, qid 0 00:28:12.099 [2024-07-25 00:08:17.712625] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.099 [2024-07-25 00:08:17.712637] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.099 [2024-07-25 00:08:17.712644] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.099 [2024-07-25 00:08:17.712650] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e610) on tqpair=0x1245120 00:28:12.099 [2024-07-25 00:08:17.712667] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.099 [2024-07-25 00:08:17.712677] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.099 [2024-07-25 00:08:17.712683] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1245120) 00:28:12.099 [2024-07-25 00:08:17.712693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.099 [2024-07-25 00:08:17.712714] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e610, cid 3, qid 0 00:28:12.099 [2024-07-25 00:08:17.712835] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.099 [2024-07-25 00:08:17.712847] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.099 [2024-07-25 00:08:17.712853] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.100 [2024-07-25 00:08:17.712860] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e610) on tqpair=0x1245120 00:28:12.100 [2024-07-25 00:08:17.712877] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.100 [2024-07-25 00:08:17.712887] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.100 [2024-07-25 00:08:17.712893] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1245120) 00:28:12.100 [2024-07-25 00:08:17.712904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.100 [2024-07-25 00:08:17.712924] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e610, cid 3, qid 0 00:28:12.100 [2024-07-25 00:08:17.713045] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.100 [2024-07-25 00:08:17.713057] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.100 [2024-07-25 00:08:17.713063] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.100 [2024-07-25 00:08:17.713070] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e610) on tqpair=0x1245120 00:28:12.100 [2024-07-25 00:08:17.713087] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.100 [2024-07-25 00:08:17.713096] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.100 [2024-07-25 00:08:17.717115] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1245120) 00:28:12.100 [2024-07-25 00:08:17.717132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.100 [2024-07-25 00:08:17.717155] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x129e610, cid 3, qid 0 00:28:12.100 [2024-07-25 00:08:17.717331] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.100 [2024-07-25 00:08:17.717343] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.100 [2024-07-25 00:08:17.717350] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.100 [2024-07-25 00:08:17.717357] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x129e610) on tqpair=0x1245120 00:28:12.100 
[2024-07-25 00:08:17.717371] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds
00:28:12.100
00:28:12.100 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:28:12.100 [2024-07-25 00:08:17.751163] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:28:12.100 [2024-07-25 00:08:17.751207] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881333 ]
00:28:12.100 EAL: No free 2048 kB hugepages reported on node 1
00:28:12.100 [2024-07-25 00:08:17.784472] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:28:12.100 [2024-07-25 00:08:17.784524] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:28:12.100 [2024-07-25 00:08:17.784534] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:28:12.100 [2024-07-25 00:08:17.784549] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:28:12.100 [2024-07-25 00:08:17.784562] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:28:12.100 [2024-07-25 00:08:17.788158] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:28:12.100 [2024-07-25 00:08:17.788201] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1476120 0
00:28:12.100 [2024-07-25 00:08:17.795115] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:28:12.100 [2024-07-25
00:08:17.795137] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:12.100 [2024-07-25 00:08:17.795145] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:12.100 [2024-07-25 00:08:17.795151] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:12.100 [2024-07-25 00:08:17.795194] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.100 [2024-07-25 00:08:17.795205] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.100 [2024-07-25 00:08:17.795212] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1476120) 00:28:12.100 [2024-07-25 00:08:17.795227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:12.100 [2024-07-25 00:08:17.795253] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf1f0, cid 0, qid 0 00:28:12.100 [2024-07-25 00:08:17.802116] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.100 [2024-07-25 00:08:17.802135] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.100 [2024-07-25 00:08:17.802143] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.100 [2024-07-25 00:08:17.802150] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf1f0) on tqpair=0x1476120 00:28:12.100 [2024-07-25 00:08:17.802169] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:12.100 [2024-07-25 00:08:17.802181] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:12.100 [2024-07-25 00:08:17.802190] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:12.100 [2024-07-25 00:08:17.802211] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.100 [2024-07-25 
00:08:17.802221] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.100 [2024-07-25 00:08:17.802227] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1476120)
00:28:12.100 [2024-07-25 00:08:17.802239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.100 [2024-07-25 00:08:17.802263] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf1f0, cid 0, qid 0
00:28:12.100 [2024-07-25 00:08:17.802450] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.100 [2024-07-25 00:08:17.802466] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.100 [2024-07-25 00:08:17.802472] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.100 [2024-07-25 00:08:17.802479] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf1f0) on tqpair=0x1476120
00:28:12.100 [2024-07-25 00:08:17.802493] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:28:12.100 [2024-07-25 00:08:17.802510] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:28:12.100 [2024-07-25 00:08:17.802523] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.100 [2024-07-25 00:08:17.802531] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.100 [2024-07-25 00:08:17.802537] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1476120)
00:28:12.100 [2024-07-25 00:08:17.802548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.100 [2024-07-25 00:08:17.802585] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf1f0, cid 0, qid 0
00:28:12.100 [2024-07-25 00:08:17.802738] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.100 [2024-07-25 00:08:17.802756] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.100 [2024-07-25 00:08:17.802764] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.100 [2024-07-25 00:08:17.802770] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf1f0) on tqpair=0x1476120
00:28:12.100 [2024-07-25 00:08:17.802780] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:28:12.100 [2024-07-25 00:08:17.802794] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:28:12.100 [2024-07-25 00:08:17.802809] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.100 [2024-07-25 00:08:17.802818] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.100 [2024-07-25 00:08:17.802824] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1476120)
00:28:12.100 [2024-07-25 00:08:17.802835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.100 [2024-07-25 00:08:17.802856] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf1f0, cid 0, qid 0
00:28:12.100 [2024-07-25 00:08:17.803037] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.101 [2024-07-25 00:08:17.803052] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.101 [2024-07-25 00:08:17.803059] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.803066] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf1f0) on tqpair=0x1476120
00:28:12.101 [2024-07-25 00:08:17.803079] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:28:12.101 [2024-07-25 00:08:17.803096] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.803112] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.803119] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1476120)
00:28:12.101 [2024-07-25 00:08:17.803132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.101 [2024-07-25 00:08:17.803154] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf1f0, cid 0, qid 0
00:28:12.101 [2024-07-25 00:08:17.803288] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.101 [2024-07-25 00:08:17.803303] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.101 [2024-07-25 00:08:17.803310] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.803320] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf1f0) on tqpair=0x1476120
00:28:12.101 [2024-07-25 00:08:17.803331] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:28:12.101 [2024-07-25 00:08:17.803340] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:28:12.101 [2024-07-25 00:08:17.803356] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:28:12.101 [2024-07-25 00:08:17.803467] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:28:12.101 [2024-07-25 00:08:17.803475] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:28:12.101 [2024-07-25 00:08:17.803490] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.803498] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.803504] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1476120)
00:28:12.101 [2024-07-25 00:08:17.803514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.101 [2024-07-25 00:08:17.803535] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf1f0, cid 0, qid 0
00:28:12.101 [2024-07-25 00:08:17.803728] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.101 [2024-07-25 00:08:17.803743] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.101 [2024-07-25 00:08:17.803750] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.803756] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf1f0) on tqpair=0x1476120
00:28:12.101 [2024-07-25 00:08:17.803767] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:28:12.101 [2024-07-25 00:08:17.803786] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.803796] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.803802] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1476120)
00:28:12.101 [2024-07-25 00:08:17.803813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.101 [2024-07-25 00:08:17.803834] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf1f0, cid 0, qid 0
00:28:12.101 [2024-07-25 00:08:17.803986] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.101 [2024-07-25 00:08:17.804002] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.101 [2024-07-25 00:08:17.804008] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804017] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf1f0) on tqpair=0x1476120
00:28:12.101 [2024-07-25 00:08:17.804028] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:28:12.101 [2024-07-25 00:08:17.804037] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:28:12.101 [2024-07-25 00:08:17.804050] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:28:12.101 [2024-07-25 00:08:17.804066] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:28:12.101 [2024-07-25 00:08:17.804084] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804092] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1476120)
00:28:12.101 [2024-07-25 00:08:17.804109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.101 [2024-07-25 00:08:17.804136] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf1f0, cid 0, qid 0
00:28:12.101 [2024-07-25 00:08:17.804323] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:12.101 [2024-07-25 00:08:17.804338] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:12.101 [2024-07-25 00:08:17.804345] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804410] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1476120): datao=0, datal=4096, cccid=0
00:28:12.101 [2024-07-25 00:08:17.804420] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cf1f0) on tqpair(0x1476120): expected_datao=0, payload_size=4096
00:28:12.101 [2024-07-25 00:08:17.804428] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804439] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804447] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804460] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.101 [2024-07-25 00:08:17.804469] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.101 [2024-07-25 00:08:17.804476] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804482] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf1f0) on tqpair=0x1476120
00:28:12.101 [2024-07-25 00:08:17.804500] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:28:12.101 [2024-07-25 00:08:17.804509] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:28:12.101 [2024-07-25 00:08:17.804517] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:28:12.101 [2024-07-25 00:08:17.804524] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:28:12.101 [2024-07-25 00:08:17.804532] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:28:12.101 [2024-07-25 00:08:17.804540] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:28:12.101 [2024-07-25 00:08:17.804558] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:28:12.101 [2024-07-25 00:08:17.804570] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804578] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804584] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1476120)
00:28:12.101 [2024-07-25 00:08:17.804595] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:28:12.101 [2024-07-25 00:08:17.804617] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf1f0, cid 0, qid 0
00:28:12.101 [2024-07-25 00:08:17.804797] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.101 [2024-07-25 00:08:17.804812] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.101 [2024-07-25 00:08:17.804818] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804825] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf1f0) on tqpair=0x1476120
00:28:12.101 [2024-07-25 00:08:17.804838] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804846] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804852] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1476120)
00:28:12.101 [2024-07-25 00:08:17.804862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:12.101 [2024-07-25 00:08:17.804872] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804883] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804890] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1476120)
00:28:12.101 [2024-07-25 00:08:17.804899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:12.101 [2024-07-25 00:08:17.804908] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804915] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804921] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1476120)
00:28:12.101 [2024-07-25 00:08:17.804930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:12.101 [2024-07-25 00:08:17.804940] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804947] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.804953] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120)
00:28:12.101 [2024-07-25 00:08:17.804961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:12.101 [2024-07-25 00:08:17.804970] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:28:12.101 [2024-07-25 00:08:17.804991] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:28:12.101 [2024-07-25 00:08:17.805004] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.101 [2024-07-25 00:08:17.805012] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1476120)
00:28:12.101 [2024-07-25 00:08:17.805022] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.101 [2024-07-25 00:08:17.805045] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf1f0, cid 0, qid 0
00:28:12.102 [2024-07-25 00:08:17.805056] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf350, cid 1, qid 0
00:28:12.102 [2024-07-25 00:08:17.805064] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf4b0, cid 2, qid 0
00:28:12.102 [2024-07-25 00:08:17.805071] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0
00:28:12.102 [2024-07-25 00:08:17.805079] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf770, cid 4, qid 0
00:28:12.102 [2024-07-25 00:08:17.805257] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.102 [2024-07-25 00:08:17.805272] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.102 [2024-07-25 00:08:17.805279] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.102 [2024-07-25 00:08:17.805286] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf770) on tqpair=0x1476120
00:28:12.102 [2024-07-25 00:08:17.805297] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:28:12.102 [2024-07-25 00:08:17.805306] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:28:12.102 [2024-07-25 00:08:17.805323] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:28:12.102 [2024-07-25 00:08:17.805336] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:28:12.102 [2024-07-25 00:08:17.805346] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.102 [2024-07-25 00:08:17.805354] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.102 [2024-07-25 00:08:17.805360] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1476120)
00:28:12.102 [2024-07-25 00:08:17.805375] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:28:12.102 [2024-07-25 00:08:17.805396] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf770, cid 4, qid 0
00:28:12.102 [2024-07-25 00:08:17.805572] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.102 [2024-07-25 00:08:17.805587] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.102 [2024-07-25 00:08:17.805593] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.102 [2024-07-25 00:08:17.805600] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf770) on tqpair=0x1476120
00:28:12.102 [2024-07-25 00:08:17.805672] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:28:12.102 [2024-07-25 00:08:17.805711] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:28:12.102 [2024-07-25 00:08:17.805727] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.102 [2024-07-25 00:08:17.805734] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1476120)
00:28:12.102 [2024-07-25 00:08:17.805745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.102 [2024-07-25 00:08:17.805766] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf770, cid 4, qid 0
00:28:12.102 [2024-07-25 00:08:17.805923] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:12.102 [2024-07-25 00:08:17.805938] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:12.102 [2024-07-25 00:08:17.805945] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:12.102 [2024-07-25 00:08:17.806016] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1476120): datao=0, datal=4096, cccid=4
00:28:12.102 [2024-07-25 00:08:17.806025] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cf770) on tqpair(0x1476120): expected_datao=0, payload_size=4096
00:28:12.102 [2024-07-25 00:08:17.806033] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.362 [2024-07-25 00:08:17.810115] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:12.362 [2024-07-25 00:08:17.810129] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:12.362 [2024-07-25 00:08:17.810143] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.362 [2024-07-25 00:08:17.810153] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.362 [2024-07-25 00:08:17.810160] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.362 [2024-07-25 00:08:17.810166] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf770) on tqpair=0x1476120
00:28:12.362 [2024-07-25 00:08:17.810185] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:28:12.362 [2024-07-25 00:08:17.810207] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:28:12.363 [2024-07-25 00:08:17.810228] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:28:12.363 [2024-07-25 00:08:17.810242] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.810249] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1476120)
00:28:12.363 [2024-07-25 00:08:17.810260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.363 [2024-07-25 00:08:17.810283] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf770, cid 4, qid 0
00:28:12.363 [2024-07-25 00:08:17.810444] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:12.363 [2024-07-25 00:08:17.810520] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:12.363 [2024-07-25 00:08:17.810529] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.810539] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1476120): datao=0, datal=4096, cccid=4
00:28:12.363 [2024-07-25 00:08:17.810605] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cf770) on tqpair(0x1476120): expected_datao=0, payload_size=4096
00:28:12.363 [2024-07-25 00:08:17.810614] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.810632] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.810641] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.810653] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.363 [2024-07-25 00:08:17.810662] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.363 [2024-07-25 00:08:17.810669] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.810676] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf770) on tqpair=0x1476120
00:28:12.363 [2024-07-25 00:08:17.810700] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:28:12.363 [2024-07-25 00:08:17.810722] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:28:12.363 [2024-07-25 00:08:17.810736] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.810744] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1476120)
00:28:12.363 [2024-07-25 00:08:17.810755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.363 [2024-07-25 00:08:17.810791] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf770, cid 4, qid 0
00:28:12.363 [2024-07-25 00:08:17.810942] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:12.363 [2024-07-25 00:08:17.810958] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:12.363 [2024-07-25 00:08:17.811029] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.811036] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1476120): datao=0, datal=4096, cccid=4
00:28:12.363 [2024-07-25 00:08:17.811043] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cf770) on tqpair(0x1476120): expected_datao=0, payload_size=4096
00:28:12.363 [2024-07-25 00:08:17.811051] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.811136] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.811145] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.811157] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.363 [2024-07-25 00:08:17.811166] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.363 [2024-07-25 00:08:17.811173] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.811179] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf770) on tqpair=0x1476120
00:28:12.363 [2024-07-25 00:08:17.811196] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:28:12.363 [2024-07-25 00:08:17.811213] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:28:12.363 [2024-07-25 00:08:17.811231] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:28:12.363 [2024-07-25 00:08:17.811244] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:28:12.363 [2024-07-25 00:08:17.811254] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:28:12.363 [2024-07-25 00:08:17.811263] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:28:12.363 [2024-07-25 00:08:17.811274] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:28:12.363 [2024-07-25 00:08:17.811283] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:28:12.363 [2024-07-25 00:08:17.811308] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.811318] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1476120)
00:28:12.363 [2024-07-25 00:08:17.811329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.363 [2024-07-25 00:08:17.811340] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.811347] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.811354] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1476120)
00:28:12.363 [2024-07-25 00:08:17.811363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:28:12.363 [2024-07-25 00:08:17.811388] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf770, cid 4, qid 0
00:28:12.363 [2024-07-25 00:08:17.811415] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf8d0, cid 5, qid 0
00:28:12.363 [2024-07-25 00:08:17.811569] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.363 [2024-07-25 00:08:17.811587] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.363 [2024-07-25 00:08:17.811594] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.811601] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf770) on tqpair=0x1476120
00:28:12.363 [2024-07-25 00:08:17.811613] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.363 [2024-07-25 00:08:17.811622] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.363 [2024-07-25 00:08:17.811628] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.811635] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf8d0) on tqpair=0x1476120
00:28:12.363 [2024-07-25 00:08:17.811654] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.811664] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1476120)
00:28:12.363 [2024-07-25 00:08:17.811674] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.363 [2024-07-25 00:08:17.811696] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf8d0, cid 5, qid 0
00:28:12.363 [2024-07-25 00:08:17.811884] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.363 [2024-07-25 00:08:17.811899] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.363 [2024-07-25 00:08:17.811906] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.811913] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf8d0) on tqpair=0x1476120
00:28:12.363 [2024-07-25 00:08:17.811940] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.811951] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1476120)
00:28:12.363 [2024-07-25 00:08:17.811961] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.363 [2024-07-25 00:08:17.811999] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf8d0, cid 5, qid 0
00:28:12.363 [2024-07-25 00:08:17.812151] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.363 [2024-07-25 00:08:17.812168] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.363 [2024-07-25 00:08:17.812174] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.812181] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf8d0) on tqpair=0x1476120
00:28:12.363 [2024-07-25 00:08:17.812205] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.812215] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1476120)
00:28:12.363 [2024-07-25 00:08:17.812226] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.363 [2024-07-25 00:08:17.812250] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf8d0, cid 5, qid 0
00:28:12.363 [2024-07-25 00:08:17.812376] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.363 [2024-07-25 00:08:17.812390] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.363 [2024-07-25 00:08:17.812397] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.812404] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf8d0) on tqpair=0x1476120
00:28:12.363 [2024-07-25 00:08:17.812427] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.812437] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1476120)
00:28:12.363 [2024-07-25 00:08:17.812447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.363 [2024-07-25 00:08:17.812459] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.363 [2024-07-25 00:08:17.812467] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1476120)
00:28:12.363 [2024-07-25 00:08:17.812476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.363 [2024-07-25 00:08:17.812487] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.812495] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1476120)
00:28:12.364 [2024-07-25 00:08:17.812504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.364 [2024-07-25 00:08:17.812530] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.812538] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1476120)
00:28:12.364 [2024-07-25 00:08:17.812547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.364 [2024-07-25 00:08:17.812569] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf8d0, cid 5, qid 0
00:28:12.364 [2024-07-25 00:08:17.812579] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf770, cid 4, qid 0
00:28:12.364 [2024-07-25 00:08:17.812603] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cfa30, cid 6, qid 0
00:28:12.364 [2024-07-25 00:08:17.812611] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cfb90, cid 7, qid 0
00:28:12.364 [2024-07-25 00:08:17.812871] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:12.364 [2024-07-25 00:08:17.812887] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:12.364 [2024-07-25 00:08:17.812894] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.812900] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1476120): datao=0, datal=8192, cccid=5
00:28:12.364 [2024-07-25 00:08:17.812908] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cf8d0) on tqpair(0x1476120): expected_datao=0, payload_size=8192
00:28:12.364 [2024-07-25 00:08:17.812976] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.812988] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.812995] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.813004] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:12.364 [2024-07-25 00:08:17.813016] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:12.364 [2024-07-25 00:08:17.813024] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.813030] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1476120): datao=0, datal=512, cccid=4
00:28:12.364 [2024-07-25 00:08:17.813037] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cf770) on tqpair(0x1476120): expected_datao=0, payload_size=512
00:28:12.364 [2024-07-25 00:08:17.813044] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.813054] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.813061] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.813069] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:12.364 [2024-07-25 00:08:17.813077] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:12.364 [2024-07-25 00:08:17.813084] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.813090] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1476120): datao=0, datal=512, cccid=6
00:28:12.364 [2024-07-25 00:08:17.813098] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cfa30) on tqpair(0x1476120): expected_datao=0, payload_size=512
00:28:12.364 [2024-07-25 00:08:17.813111] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.813121] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.813128] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.813136] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:12.364 [2024-07-25 00:08:17.813145] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:12.364 [2024-07-25 00:08:17.813151] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.813157] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1476120): datao=0, datal=4096, cccid=7
00:28:12.364 [2024-07-25 00:08:17.813165] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cfb90) on tqpair(0x1476120): expected_datao=0, payload_size=4096
00:28:12.364 [2024-07-25 00:08:17.813172] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.813181] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.813189] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.813201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.364 [2024-07-25 00:08:17.813210] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.364 [2024-07-25 00:08:17.813217] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.813224] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf8d0) on tqpair=0x1476120
00:28:12.364 [2024-07-25 00:08:17.813244] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.364 [2024-07-25 00:08:17.813256] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.364 [2024-07-25 00:08:17.813262] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.813269] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf770) on tqpair=0x1476120
00:28:12.364 [2024-07-25 00:08:17.813284] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.364 [2024-07-25 00:08:17.813294] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.364 [2024-07-25 00:08:17.813301] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.813307] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cfa30) on tqpair=0x1476120
00:28:12.364 [2024-07-25 00:08:17.813322] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:12.364 [2024-07-25 00:08:17.813333] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:12.364 [2024-07-25 00:08:17.813339] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:12.364 [2024-07-25 00:08:17.813345] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cfb90) on tqpair=0x1476120
00:28:12.364 =====================================================
00:28:12.364 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:12.364 =====================================================
00:28:12.364 Controller Capabilities/Features
00:28:12.364 ================================
00:28:12.364 Vendor ID: 8086
00:28:12.364 Subsystem Vendor ID: 8086
00:28:12.364 
Serial Number: SPDK00000000000001 00:28:12.364 Model Number: SPDK bdev Controller 00:28:12.364 Firmware Version: 24.05.1 00:28:12.364 Recommended Arb Burst: 6 00:28:12.364 IEEE OUI Identifier: e4 d2 5c 00:28:12.364 Multi-path I/O 00:28:12.364 May have multiple subsystem ports: Yes 00:28:12.364 May have multiple controllers: Yes 00:28:12.364 Associated with SR-IOV VF: No 00:28:12.364 Max Data Transfer Size: 131072 00:28:12.364 Max Number of Namespaces: 32 00:28:12.364 Max Number of I/O Queues: 127 00:28:12.364 NVMe Specification Version (VS): 1.3 00:28:12.364 NVMe Specification Version (Identify): 1.3 00:28:12.364 Maximum Queue Entries: 128 00:28:12.364 Contiguous Queues Required: Yes 00:28:12.364 Arbitration Mechanisms Supported 00:28:12.364 Weighted Round Robin: Not Supported 00:28:12.364 Vendor Specific: Not Supported 00:28:12.364 Reset Timeout: 15000 ms 00:28:12.364 Doorbell Stride: 4 bytes 00:28:12.364 NVM Subsystem Reset: Not Supported 00:28:12.364 Command Sets Supported 00:28:12.364 NVM Command Set: Supported 00:28:12.364 Boot Partition: Not Supported 00:28:12.364 Memory Page Size Minimum: 4096 bytes 00:28:12.364 Memory Page Size Maximum: 4096 bytes 00:28:12.364 Persistent Memory Region: Not Supported 00:28:12.364 Optional Asynchronous Events Supported 00:28:12.364 Namespace Attribute Notices: Supported 00:28:12.364 Firmware Activation Notices: Not Supported 00:28:12.364 ANA Change Notices: Not Supported 00:28:12.364 PLE Aggregate Log Change Notices: Not Supported 00:28:12.364 LBA Status Info Alert Notices: Not Supported 00:28:12.364 EGE Aggregate Log Change Notices: Not Supported 00:28:12.364 Normal NVM Subsystem Shutdown event: Not Supported 00:28:12.364 Zone Descriptor Change Notices: Not Supported 00:28:12.364 Discovery Log Change Notices: Not Supported 00:28:12.364 Controller Attributes 00:28:12.364 128-bit Host Identifier: Supported 00:28:12.364 Non-Operational Permissive Mode: Not Supported 00:28:12.364 NVM Sets: Not Supported 00:28:12.364 Read 
Recovery Levels: Not Supported 00:28:12.364 Endurance Groups: Not Supported 00:28:12.364 Predictable Latency Mode: Not Supported 00:28:12.364 Traffic Based Keep ALive: Not Supported 00:28:12.364 Namespace Granularity: Not Supported 00:28:12.364 SQ Associations: Not Supported 00:28:12.364 UUID List: Not Supported 00:28:12.364 Multi-Domain Subsystem: Not Supported 00:28:12.364 Fixed Capacity Management: Not Supported 00:28:12.364 Variable Capacity Management: Not Supported 00:28:12.365 Delete Endurance Group: Not Supported 00:28:12.365 Delete NVM Set: Not Supported 00:28:12.365 Extended LBA Formats Supported: Not Supported 00:28:12.365 Flexible Data Placement Supported: Not Supported 00:28:12.365 00:28:12.365 Controller Memory Buffer Support 00:28:12.365 ================================ 00:28:12.365 Supported: No 00:28:12.365 00:28:12.365 Persistent Memory Region Support 00:28:12.365 ================================ 00:28:12.365 Supported: No 00:28:12.365 00:28:12.365 Admin Command Set Attributes 00:28:12.365 ============================ 00:28:12.365 Security Send/Receive: Not Supported 00:28:12.365 Format NVM: Not Supported 00:28:12.365 Firmware Activate/Download: Not Supported 00:28:12.365 Namespace Management: Not Supported 00:28:12.365 Device Self-Test: Not Supported 00:28:12.365 Directives: Not Supported 00:28:12.365 NVMe-MI: Not Supported 00:28:12.365 Virtualization Management: Not Supported 00:28:12.365 Doorbell Buffer Config: Not Supported 00:28:12.365 Get LBA Status Capability: Not Supported 00:28:12.365 Command & Feature Lockdown Capability: Not Supported 00:28:12.365 Abort Command Limit: 4 00:28:12.365 Async Event Request Limit: 4 00:28:12.365 Number of Firmware Slots: N/A 00:28:12.365 Firmware Slot 1 Read-Only: N/A 00:28:12.365 Firmware Activation Without Reset: N/A 00:28:12.365 Multiple Update Detection Support: N/A 00:28:12.365 Firmware Update Granularity: No Information Provided 00:28:12.365 Per-Namespace SMART Log: No 00:28:12.365 Asymmetric Namespace 
Access Log Page: Not Supported 00:28:12.365 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:12.365 Command Effects Log Page: Supported 00:28:12.365 Get Log Page Extended Data: Supported 00:28:12.365 Telemetry Log Pages: Not Supported 00:28:12.365 Persistent Event Log Pages: Not Supported 00:28:12.365 Supported Log Pages Log Page: May Support 00:28:12.365 Commands Supported & Effects Log Page: Not Supported 00:28:12.365 Feature Identifiers & Effects Log Page:May Support 00:28:12.365 NVMe-MI Commands & Effects Log Page: May Support 00:28:12.365 Data Area 4 for Telemetry Log: Not Supported 00:28:12.365 Error Log Page Entries Supported: 128 00:28:12.365 Keep Alive: Supported 00:28:12.365 Keep Alive Granularity: 10000 ms 00:28:12.365 00:28:12.365 NVM Command Set Attributes 00:28:12.365 ========================== 00:28:12.365 Submission Queue Entry Size 00:28:12.365 Max: 64 00:28:12.365 Min: 64 00:28:12.365 Completion Queue Entry Size 00:28:12.365 Max: 16 00:28:12.365 Min: 16 00:28:12.365 Number of Namespaces: 32 00:28:12.365 Compare Command: Supported 00:28:12.365 Write Uncorrectable Command: Not Supported 00:28:12.365 Dataset Management Command: Supported 00:28:12.365 Write Zeroes Command: Supported 00:28:12.365 Set Features Save Field: Not Supported 00:28:12.365 Reservations: Supported 00:28:12.365 Timestamp: Not Supported 00:28:12.365 Copy: Supported 00:28:12.365 Volatile Write Cache: Present 00:28:12.365 Atomic Write Unit (Normal): 1 00:28:12.365 Atomic Write Unit (PFail): 1 00:28:12.365 Atomic Compare & Write Unit: 1 00:28:12.365 Fused Compare & Write: Supported 00:28:12.365 Scatter-Gather List 00:28:12.365 SGL Command Set: Supported 00:28:12.365 SGL Keyed: Supported 00:28:12.365 SGL Bit Bucket Descriptor: Not Supported 00:28:12.365 SGL Metadata Pointer: Not Supported 00:28:12.365 Oversized SGL: Not Supported 00:28:12.365 SGL Metadata Address: Not Supported 00:28:12.365 SGL Offset: Supported 00:28:12.365 Transport SGL Data Block: Not Supported 00:28:12.365 Replay 
Protected Memory Block: Not Supported 00:28:12.365 00:28:12.365 Firmware Slot Information 00:28:12.365 ========================= 00:28:12.365 Active slot: 1 00:28:12.365 Slot 1 Firmware Revision: 24.05.1 00:28:12.365 00:28:12.365 00:28:12.365 Commands Supported and Effects 00:28:12.365 ============================== 00:28:12.365 Admin Commands 00:28:12.365 -------------- 00:28:12.365 Get Log Page (02h): Supported 00:28:12.365 Identify (06h): Supported 00:28:12.365 Abort (08h): Supported 00:28:12.365 Set Features (09h): Supported 00:28:12.365 Get Features (0Ah): Supported 00:28:12.365 Asynchronous Event Request (0Ch): Supported 00:28:12.365 Keep Alive (18h): Supported 00:28:12.365 I/O Commands 00:28:12.365 ------------ 00:28:12.365 Flush (00h): Supported LBA-Change 00:28:12.365 Write (01h): Supported LBA-Change 00:28:12.365 Read (02h): Supported 00:28:12.365 Compare (05h): Supported 00:28:12.365 Write Zeroes (08h): Supported LBA-Change 00:28:12.365 Dataset Management (09h): Supported LBA-Change 00:28:12.365 Copy (19h): Supported LBA-Change 00:28:12.365 Unknown (79h): Supported LBA-Change 00:28:12.365 Unknown (7Ah): Supported 00:28:12.365 00:28:12.365 Error Log 00:28:12.365 ========= 00:28:12.365 00:28:12.365 Arbitration 00:28:12.365 =========== 00:28:12.365 Arbitration Burst: 1 00:28:12.365 00:28:12.365 Power Management 00:28:12.365 ================ 00:28:12.365 Number of Power States: 1 00:28:12.365 Current Power State: Power State #0 00:28:12.365 Power State #0: 00:28:12.365 Max Power: 0.00 W 00:28:12.365 Non-Operational State: Operational 00:28:12.365 Entry Latency: Not Reported 00:28:12.365 Exit Latency: Not Reported 00:28:12.365 Relative Read Throughput: 0 00:28:12.365 Relative Read Latency: 0 00:28:12.365 Relative Write Throughput: 0 00:28:12.365 Relative Write Latency: 0 00:28:12.365 Idle Power: Not Reported 00:28:12.365 Active Power: Not Reported 00:28:12.365 Non-Operational Permissive Mode: Not Supported 00:28:12.365 00:28:12.365 Health Information 
00:28:12.365 ================== 00:28:12.365 Critical Warnings: 00:28:12.365 Available Spare Space: OK 00:28:12.365 Temperature: OK 00:28:12.365 Device Reliability: OK 00:28:12.365 Read Only: No 00:28:12.365 Volatile Memory Backup: OK 00:28:12.365 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:12.365 Temperature Threshold: [2024-07-25 00:08:17.813492] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.365 [2024-07-25 00:08:17.813504] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1476120) 00:28:12.365 [2024-07-25 00:08:17.813515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.365 [2024-07-25 00:08:17.813537] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cfb90, cid 7, qid 0 00:28:12.365 [2024-07-25 00:08:17.813698] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.365 [2024-07-25 00:08:17.813714] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.365 [2024-07-25 00:08:17.813721] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.365 [2024-07-25 00:08:17.813728] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cfb90) on tqpair=0x1476120 00:28:12.365 [2024-07-25 00:08:17.813774] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:12.365 [2024-07-25 00:08:17.813798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.365 [2024-07-25 00:08:17.813811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.365 [2024-07-25 00:08:17.813820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:12.365 [2024-07-25 00:08:17.813830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.365 [2024-07-25 00:08:17.813858] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.365 [2024-07-25 00:08:17.813866] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.365 [2024-07-25 00:08:17.813873] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.365 [2024-07-25 00:08:17.813883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.365 [2024-07-25 00:08:17.813905] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.365 [2024-07-25 00:08:17.814045] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.365 [2024-07-25 00:08:17.814060] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.365 [2024-07-25 00:08:17.814067] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.365 [2024-07-25 00:08:17.814073] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.365 [2024-07-25 00:08:17.814087] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.365 [2024-07-25 00:08:17.814095] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.818110] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.366 [2024-07-25 00:08:17.818129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.366 [2024-07-25 00:08:17.818162] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.366 [2024-07-25 
00:08:17.818306] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.366 [2024-07-25 00:08:17.818321] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.366 [2024-07-25 00:08:17.818328] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.818335] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.366 [2024-07-25 00:08:17.818344] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:12.366 [2024-07-25 00:08:17.818352] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:12.366 [2024-07-25 00:08:17.818371] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.818384] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.818392] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.366 [2024-07-25 00:08:17.818402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.366 [2024-07-25 00:08:17.818423] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.366 [2024-07-25 00:08:17.818566] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.366 [2024-07-25 00:08:17.818582] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.366 [2024-07-25 00:08:17.818588] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.818595] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.366 [2024-07-25 00:08:17.818615] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.366 [2024-07-25 
00:08:17.818624] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.818631] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.366 [2024-07-25 00:08:17.818642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.366 [2024-07-25 00:08:17.818662] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.366 [2024-07-25 00:08:17.818805] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.366 [2024-07-25 00:08:17.818820] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.366 [2024-07-25 00:08:17.818826] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.818833] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.366 [2024-07-25 00:08:17.818853] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.818863] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.818869] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.366 [2024-07-25 00:08:17.818880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.366 [2024-07-25 00:08:17.818900] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.366 [2024-07-25 00:08:17.819026] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.366 [2024-07-25 00:08:17.819040] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.366 [2024-07-25 00:08:17.819047] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.366 [2024-07-25 
00:08:17.819053] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.366 [2024-07-25 00:08:17.819073] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.819083] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.819089] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.366 [2024-07-25 00:08:17.819100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.366 [2024-07-25 00:08:17.819128] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.366 [2024-07-25 00:08:17.819253] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.366 [2024-07-25 00:08:17.819269] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.366 [2024-07-25 00:08:17.819275] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.819282] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.366 [2024-07-25 00:08:17.819301] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.819311] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.819321] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.366 [2024-07-25 00:08:17.819332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.366 [2024-07-25 00:08:17.819353] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.366 [2024-07-25 00:08:17.819476] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:28:12.366 [2024-07-25 00:08:17.819491] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.366 [2024-07-25 00:08:17.819497] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.819504] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.366 [2024-07-25 00:08:17.819523] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.819533] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.819540] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.366 [2024-07-25 00:08:17.819550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.366 [2024-07-25 00:08:17.819571] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.366 [2024-07-25 00:08:17.819695] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.366 [2024-07-25 00:08:17.819710] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.366 [2024-07-25 00:08:17.819717] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.819723] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.366 [2024-07-25 00:08:17.819743] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.819753] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.819759] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.366 [2024-07-25 00:08:17.819770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:12.366 [2024-07-25 00:08:17.819790] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.366 [2024-07-25 00:08:17.819914] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.366 [2024-07-25 00:08:17.819929] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.366 [2024-07-25 00:08:17.819935] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.819942] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.366 [2024-07-25 00:08:17.819962] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.819971] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.819978] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.366 [2024-07-25 00:08:17.819988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.366 [2024-07-25 00:08:17.820009] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.366 [2024-07-25 00:08:17.820139] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.366 [2024-07-25 00:08:17.820154] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.366 [2024-07-25 00:08:17.820161] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.820167] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.366 [2024-07-25 00:08:17.820187] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.820197] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.820207] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.366 [2024-07-25 00:08:17.820218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.366 [2024-07-25 00:08:17.820239] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.366 [2024-07-25 00:08:17.820365] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.366 [2024-07-25 00:08:17.820380] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.366 [2024-07-25 00:08:17.820386] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.820393] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.366 [2024-07-25 00:08:17.820413] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.820422] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.820429] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.366 [2024-07-25 00:08:17.820439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.366 [2024-07-25 00:08:17.820462] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.366 [2024-07-25 00:08:17.820586] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.366 [2024-07-25 00:08:17.820601] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.366 [2024-07-25 00:08:17.820608] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.366 [2024-07-25 00:08:17.820614] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.366 
[2024-07-25 00:08:17.820634] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.820643] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.820650] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.367 [2024-07-25 00:08:17.820661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.367 [2024-07-25 00:08:17.820681] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.367 [2024-07-25 00:08:17.820807] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.367 [2024-07-25 00:08:17.820822] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.367 [2024-07-25 00:08:17.820828] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.820834] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.367 [2024-07-25 00:08:17.820854] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.820864] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.820871] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.367 [2024-07-25 00:08:17.820881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.367 [2024-07-25 00:08:17.820901] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.367 [2024-07-25 00:08:17.821053] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.367 [2024-07-25 00:08:17.821069] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.367 
[2024-07-25 00:08:17.821075] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.821082] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.367 [2024-07-25 00:08:17.821107] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.821118] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.821125] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.367 [2024-07-25 00:08:17.821139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.367 [2024-07-25 00:08:17.821161] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.367 [2024-07-25 00:08:17.821287] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.367 [2024-07-25 00:08:17.821302] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.367 [2024-07-25 00:08:17.821309] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.821315] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.367 [2024-07-25 00:08:17.821335] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.821345] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.821352] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.367 [2024-07-25 00:08:17.821362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.367 [2024-07-25 00:08:17.821383] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, 
cid 3, qid 0 00:28:12.367 [2024-07-25 00:08:17.821513] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.367 [2024-07-25 00:08:17.821527] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.367 [2024-07-25 00:08:17.821534] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.821541] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.367 [2024-07-25 00:08:17.821560] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.821570] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.821576] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.367 [2024-07-25 00:08:17.821587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.367 [2024-07-25 00:08:17.821609] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.367 [2024-07-25 00:08:17.821736] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.367 [2024-07-25 00:08:17.821750] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.367 [2024-07-25 00:08:17.821757] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.821764] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.367 [2024-07-25 00:08:17.821783] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.821793] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.821800] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.367 [2024-07-25 00:08:17.821810] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.367 [2024-07-25 00:08:17.821831] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.367 [2024-07-25 00:08:17.821955] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.367 [2024-07-25 00:08:17.821970] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.367 [2024-07-25 00:08:17.821977] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.821983] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.367 [2024-07-25 00:08:17.822003] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.822013] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.822019] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.367 [2024-07-25 00:08:17.822030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.367 [2024-07-25 00:08:17.822054] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.367 [2024-07-25 00:08:17.826113] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.367 [2024-07-25 00:08:17.826130] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.367 [2024-07-25 00:08:17.826137] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.826144] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.367 [2024-07-25 00:08:17.826166] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.826176] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.826183] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1476120) 00:28:12.367 [2024-07-25 00:08:17.826194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.367 [2024-07-25 00:08:17.826216] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cf610, cid 3, qid 0 00:28:12.367 [2024-07-25 00:08:17.826351] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.367 [2024-07-25 00:08:17.826366] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.367 [2024-07-25 00:08:17.826373] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.367 [2024-07-25 00:08:17.826379] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14cf610) on tqpair=0x1476120 00:28:12.367 [2024-07-25 00:08:17.826396] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 8 milliseconds 00:28:12.367 0 Kelvin (-273 Celsius) 00:28:12.367 Available Spare: 0% 00:28:12.367 Available Spare Threshold: 0% 00:28:12.367 Life Percentage Used: 0% 00:28:12.367 Data Units Read: 0 00:28:12.367 Data Units Written: 0 00:28:12.367 Host Read Commands: 0 00:28:12.367 Host Write Commands: 0 00:28:12.367 Controller Busy Time: 0 minutes 00:28:12.367 Power Cycles: 0 00:28:12.367 Power On Hours: 0 hours 00:28:12.367 Unsafe Shutdowns: 0 00:28:12.367 Unrecoverable Media Errors: 0 00:28:12.367 Lifetime Error Log Entries: 0 00:28:12.367 Warning Temperature Time: 0 minutes 00:28:12.367 Critical Temperature Time: 0 minutes 00:28:12.367 00:28:12.367 Number of Queues 00:28:12.367 ================ 00:28:12.367 Number of I/O Submission Queues: 127 00:28:12.367 Number of I/O Completion Queues: 127 00:28:12.367 00:28:12.367 Active Namespaces 00:28:12.367 
================= 00:28:12.367 Namespace ID:1 00:28:12.367 Error Recovery Timeout: Unlimited 00:28:12.367 Command Set Identifier: NVM (00h) 00:28:12.367 Deallocate: Supported 00:28:12.367 Deallocated/Unwritten Error: Not Supported 00:28:12.367 Deallocated Read Value: Unknown 00:28:12.367 Deallocate in Write Zeroes: Not Supported 00:28:12.367 Deallocated Guard Field: 0xFFFF 00:28:12.368 Flush: Supported 00:28:12.368 Reservation: Supported 00:28:12.368 Namespace Sharing Capabilities: Multiple Controllers 00:28:12.368 Size (in LBAs): 131072 (0GiB) 00:28:12.368 Capacity (in LBAs): 131072 (0GiB) 00:28:12.368 Utilization (in LBAs): 131072 (0GiB) 00:28:12.368 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:12.368 EUI64: ABCDEF0123456789 00:28:12.368 UUID: d414a001-e1e5-4850-b062-41143b1d78ee 00:28:12.368 Thin Provisioning: Not Supported 00:28:12.368 Per-NS Atomic Units: Yes 00:28:12.368 Atomic Boundary Size (Normal): 0 00:28:12.368 Atomic Boundary Size (PFail): 0 00:28:12.368 Atomic Boundary Offset: 0 00:28:12.368 Maximum Single Source Range Length: 65535 00:28:12.368 Maximum Copy Length: 65535 00:28:12.368 Maximum Source Range Count: 1 00:28:12.368 NGUID/EUI64 Never Reused: No 00:28:12.368 Namespace Write Protected: No 00:28:12.368 Number of LBA Formats: 1 00:28:12.368 Current LBA Format: LBA Format #00 00:28:12.368 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:12.368 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:12.368 
00:08:17 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:12.368 rmmod nvme_tcp 00:28:12.368 rmmod nvme_fabrics 00:28:12.368 rmmod nvme_keyring 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 881181 ']' 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 881181 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 881181 ']' 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 881181 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 881181 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 881181' 00:28:12.368 killing process with 
pid 881181 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 881181 00:28:12.368 00:08:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 881181 00:28:12.628 00:08:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:12.628 00:08:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:12.628 00:08:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:12.628 00:08:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.628 00:08:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:12.628 00:08:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.628 00:08:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.628 00:08:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.166 00:08:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:15.166 00:28:15.166 real 0m5.320s 00:28:15.166 user 0m4.190s 00:28:15.166 sys 0m1.870s 00:28:15.166 00:08:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:15.166 00:08:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.166 ************************************ 00:28:15.166 END TEST nvmf_identify 00:28:15.166 ************************************ 00:28:15.166 00:08:20 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:15.166 00:08:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:15.166 00:08:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:15.166 00:08:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:15.166 ************************************ 00:28:15.166 START 
TEST nvmf_perf 00:28:15.166 ************************************ 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:15.166 * Looking for test storage... 00:28:15.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.166 
00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:15.166 00:08:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:17.070 00:08:22 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:17.070 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:17.070 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:17.070 Found net devices under 0000:09:00.0: cvl_0_0 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.070 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: 
cvl_0_1' 00:28:17.071 Found net devices under 0000:09:00.1: cvl_0_1 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.071 
00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:17.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:28:17.071 00:28:17.071 --- 10.0.0.2 ping statistics --- 00:28:17.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.071 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:17.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:28:17.071 00:28:17.071 --- 10.0.0.1 ping statistics --- 00:28:17.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.071 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=883259 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 883259 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 883259 ']' 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:17.071 00:08:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:17.071 [2024-07-25 00:08:22.660398] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:17.071 [2024-07-25 00:08:22.660502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.071 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.071 [2024-07-25 00:08:22.724725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:17.367 [2024-07-25 00:08:22.819872] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.367 [2024-07-25 00:08:22.819932] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.367 [2024-07-25 00:08:22.819946] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.367 [2024-07-25 00:08:22.819956] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.367 [2024-07-25 00:08:22.819965] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:17.367 [2024-07-25 00:08:22.820081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.367 [2024-07-25 00:08:22.820152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.367 [2024-07-25 00:08:22.820212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:17.367 [2024-07-25 00:08:22.820216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.367 00:08:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:17.367 00:08:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:28:17.367 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:17.367 00:08:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:17.367 00:08:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:17.367 00:08:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:17.367 00:08:22 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:17.367 00:08:22 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:20.642 00:08:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:20.642 00:08:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:20.642 00:08:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:28:20.642 00:08:26 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:20.900 00:08:26 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:20.900 00:08:26 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:0b:00.0 ']' 00:28:20.900 00:08:26 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:20.900 00:08:26 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:20.900 00:08:26 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:21.157 [2024-07-25 00:08:26.814075] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.157 00:08:26 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:21.414 00:08:27 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:21.414 00:08:27 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:21.671 00:08:27 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:21.671 00:08:27 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:21.928 00:08:27 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:22.185 [2024-07-25 00:08:27.797723] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.185 00:08:27 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:22.442 00:08:28 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:28:22.442 00:08:28 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 
00:28:22.442 00:08:28 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:22.442 00:08:28 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:28:23.812 Initializing NVMe Controllers 00:28:23.812 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:28:23.812 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:28:23.812 Initialization complete. Launching workers. 00:28:23.812 ======================================================== 00:28:23.812 Latency(us) 00:28:23.812 Device Information : IOPS MiB/s Average min max 00:28:23.812 PCIE (0000:0b:00.0) NSID 1 from core 0: 85030.34 332.15 375.84 39.25 5380.09 00:28:23.812 ======================================================== 00:28:23.812 Total : 85030.34 332.15 375.84 39.25 5380.09 00:28:23.812 00:28:23.812 00:08:29 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:23.812 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.184 Initializing NVMe Controllers 00:28:25.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:25.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:25.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:25.184 Initialization complete. Launching workers. 
00:28:25.184 ======================================================== 00:28:25.184 Latency(us) 00:28:25.184 Device Information : IOPS MiB/s Average min max 00:28:25.184 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 68.00 0.27 15184.51 185.01 45964.01 00:28:25.184 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15249.57 7937.87 50865.05 00:28:25.184 ======================================================== 00:28:25.184 Total : 134.00 0.52 15216.55 185.01 50865.05 00:28:25.184 00:28:25.184 00:08:30 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:25.184 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.117 Initializing NVMe Controllers 00:28:26.117 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:26.117 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:26.117 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:26.117 Initialization complete. Launching workers. 
00:28:26.117 ======================================================== 00:28:26.117 Latency(us) 00:28:26.117 Device Information : IOPS MiB/s Average min max 00:28:26.117 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8271.91 32.31 3881.42 454.46 9692.96 00:28:26.117 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3867.49 15.11 8310.20 6535.02 16080.16 00:28:26.117 ======================================================== 00:28:26.117 Total : 12139.40 47.42 5292.38 454.46 16080.16 00:28:26.117 00:28:26.117 00:08:31 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:26.117 00:08:31 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:26.117 00:08:31 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:26.117 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.645 Initializing NVMe Controllers 00:28:28.645 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:28.645 Controller IO queue size 128, less than required. 00:28:28.645 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:28.645 Controller IO queue size 128, less than required. 00:28:28.645 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:28.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:28.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:28.645 Initialization complete. Launching workers. 
00:28:28.645 ======================================================== 00:28:28.645 Latency(us) 00:28:28.645 Device Information : IOPS MiB/s Average min max 00:28:28.645 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1079.50 269.87 121132.70 79999.47 214426.82 00:28:28.645 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 566.50 141.62 232465.30 88609.82 347412.19 00:28:28.645 ======================================================== 00:28:28.645 Total : 1646.00 411.50 159449.78 79999.47 347412.19 00:28:28.645 00:28:28.645 00:08:34 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:28.645 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.645 No valid NVMe controllers or AIO or URING devices found 00:28:28.645 Initializing NVMe Controllers 00:28:28.645 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:28.645 Controller IO queue size 128, less than required. 00:28:28.645 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:28.645 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:28.645 Controller IO queue size 128, less than required. 00:28:28.645 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:28.645 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:28:28.645 WARNING: Some requested NVMe devices were skipped 00:28:28.645 00:08:34 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:28.645 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.926 Initializing NVMe Controllers 00:28:31.926 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:31.926 Controller IO queue size 128, less than required. 00:28:31.926 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:31.926 Controller IO queue size 128, less than required. 00:28:31.926 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:31.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:31.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:31.926 Initialization complete. Launching workers. 
00:28:31.926 00:28:31.926 ==================== 00:28:31.926 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:31.926 TCP transport: 00:28:31.926 polls: 20012 00:28:31.926 idle_polls: 8240 00:28:31.926 sock_completions: 11772 00:28:31.926 nvme_completions: 4347 00:28:31.926 submitted_requests: 6616 00:28:31.926 queued_requests: 1 00:28:31.926 00:28:31.926 ==================== 00:28:31.926 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:31.926 TCP transport: 00:28:31.926 polls: 23519 00:28:31.926 idle_polls: 13975 00:28:31.926 sock_completions: 9544 00:28:31.926 nvme_completions: 4177 00:28:31.926 submitted_requests: 6256 00:28:31.926 queued_requests: 1 00:28:31.926 ======================================================== 00:28:31.926 Latency(us) 00:28:31.926 Device Information : IOPS MiB/s Average min max 00:28:31.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1086.46 271.62 121571.76 67392.94 196337.84 00:28:31.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1043.96 260.99 126542.58 70525.02 203388.79 00:28:31.926 ======================================================== 00:28:31.926 Total : 2130.43 532.61 124007.59 67392.94 203388.79 00:28:31.926 00:28:31.926 00:08:36 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:31.926 00:08:37 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:31.926 00:08:37 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:31.926 00:08:37 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:0b:00.0 ']' 00:28:31.926 00:08:37 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:35.204 00:08:40 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # 
ls_guid=726d81ae-40f8-45fc-98d0-7b0af367473d 00:28:35.204 00:08:40 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 726d81ae-40f8-45fc-98d0-7b0af367473d 00:28:35.204 00:08:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=726d81ae-40f8-45fc-98d0-7b0af367473d 00:28:35.204 00:08:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:35.204 00:08:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:35.204 00:08:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:35.204 00:08:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:35.204 00:08:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:35.204 { 00:28:35.204 "uuid": "726d81ae-40f8-45fc-98d0-7b0af367473d", 00:28:35.204 "name": "lvs_0", 00:28:35.204 "base_bdev": "Nvme0n1", 00:28:35.204 "total_data_clusters": 238234, 00:28:35.204 "free_clusters": 238234, 00:28:35.204 "block_size": 512, 00:28:35.204 "cluster_size": 4194304 00:28:35.204 } 00:28:35.204 ]' 00:28:35.204 00:08:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="726d81ae-40f8-45fc-98d0-7b0af367473d") .free_clusters' 00:28:35.204 00:08:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:28:35.204 00:08:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="726d81ae-40f8-45fc-98d0-7b0af367473d") .cluster_size' 00:28:35.462 00:08:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:35.462 00:08:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:28:35.462 00:08:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:28:35.462 952936 00:28:35.462 00:08:40 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:35.462 00:08:40 nvmf_tcp.nvmf_perf -- 
host/perf.sh@78 -- # free_mb=20480 00:28:35.462 00:08:40 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 726d81ae-40f8-45fc-98d0-7b0af367473d lbd_0 20480 00:28:35.719 00:08:41 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=635329e7-2f8a-4663-bb17-7da2483ce463 00:28:35.719 00:08:41 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 635329e7-2f8a-4663-bb17-7da2483ce463 lvs_n_0 00:28:36.651 00:08:42 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=9538564d-49e5-4f48-9198-6b51584476bb 00:28:36.651 00:08:42 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 9538564d-49e5-4f48-9198-6b51584476bb 00:28:36.651 00:08:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=9538564d-49e5-4f48-9198-6b51584476bb 00:28:36.651 00:08:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:36.651 00:08:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:36.651 00:08:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:36.651 00:08:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:36.651 00:08:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:36.651 { 00:28:36.651 "uuid": "726d81ae-40f8-45fc-98d0-7b0af367473d", 00:28:36.651 "name": "lvs_0", 00:28:36.651 "base_bdev": "Nvme0n1", 00:28:36.651 "total_data_clusters": 238234, 00:28:36.651 "free_clusters": 233114, 00:28:36.651 "block_size": 512, 00:28:36.651 "cluster_size": 4194304 00:28:36.651 }, 00:28:36.651 { 00:28:36.651 "uuid": "9538564d-49e5-4f48-9198-6b51584476bb", 00:28:36.651 "name": "lvs_n_0", 00:28:36.651 "base_bdev": "635329e7-2f8a-4663-bb17-7da2483ce463", 00:28:36.651 "total_data_clusters": 5114, 00:28:36.651 "free_clusters": 
5114, 00:28:36.651 "block_size": 512, 00:28:36.651 "cluster_size": 4194304 00:28:36.651 } 00:28:36.651 ]' 00:28:36.651 00:08:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="9538564d-49e5-4f48-9198-6b51584476bb") .free_clusters' 00:28:36.908 00:08:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:28:36.908 00:08:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="9538564d-49e5-4f48-9198-6b51584476bb") .cluster_size' 00:28:36.908 00:08:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:36.908 00:08:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:28:36.908 00:08:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:28:36.908 20456 00:28:36.908 00:08:42 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:36.908 00:08:42 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9538564d-49e5-4f48-9198-6b51584476bb lbd_nest_0 20456 00:28:37.166 00:08:42 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=9120ea94-4e94-4ad1-b4fb-7aaaf2eb4a57 00:28:37.166 00:08:42 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:37.423 00:08:42 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:37.423 00:08:42 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 9120ea94-4e94-4ad1-b4fb-7aaaf2eb4a57 00:28:37.680 00:08:43 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.680 00:08:43 nvmf_tcp.nvmf_perf -- 
host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:37.937 00:08:43 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:37.937 00:08:43 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:37.937 00:08:43 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:37.937 00:08:43 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:37.937 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.152 Initializing NVMe Controllers 00:28:50.152 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:50.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:50.152 Initialization complete. Launching workers. 00:28:50.152 ======================================================== 00:28:50.152 Latency(us) 00:28:50.152 Device Information : IOPS MiB/s Average min max 00:28:50.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.20 0.02 20368.31 219.66 45781.26 00:28:50.152 ======================================================== 00:28:50.152 Total : 49.20 0.02 20368.31 219.66 45781.26 00:28:50.152 00:28:50.152 00:08:53 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:50.152 00:08:53 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:50.152 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.114 Initializing NVMe Controllers 00:29:00.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:00.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:00.114 Initialization complete. 
Launching workers. 00:29:00.114 ======================================================== 00:29:00.114 Latency(us) 00:29:00.114 Device Information : IOPS MiB/s Average min max 00:29:00.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 68.80 8.60 14542.30 4030.37 51877.90 00:29:00.114 ======================================================== 00:29:00.114 Total : 68.80 8.60 14542.30 4030.37 51877.90 00:29:00.114 00:29:00.114 00:09:04 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:00.114 00:09:04 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:00.114 00:09:04 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:00.114 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.069 Initializing NVMe Controllers 00:29:10.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:10.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:10.069 Initialization complete. Launching workers. 
00:29:10.069 ======================================================== 00:29:10.069 Latency(us) 00:29:10.069 Device Information : IOPS MiB/s Average min max 00:29:10.069 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7068.79 3.45 4527.31 288.09 12108.54 00:29:10.069 ======================================================== 00:29:10.069 Total : 7068.79 3.45 4527.31 288.09 12108.54 00:29:10.069 00:29:10.069 00:09:14 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:10.069 00:09:14 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:10.069 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.032 Initializing NVMe Controllers 00:29:20.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:20.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:20.032 Initialization complete. Launching workers. 
00:29:20.032 ======================================================== 00:29:20.032 Latency(us) 00:29:20.032 Device Information : IOPS MiB/s Average min max 00:29:20.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2050.40 256.30 15627.02 930.98 37284.93 00:29:20.032 ======================================================== 00:29:20.032 Total : 2050.40 256.30 15627.02 930.98 37284.93 00:29:20.032 00:29:20.032 00:09:24 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:20.032 00:09:24 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:20.032 00:09:24 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:20.032 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.998 Initializing NVMe Controllers 00:29:29.998 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:29.998 Controller IO queue size 128, less than required. 00:29:29.998 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:29.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:29.998 Initialization complete. Launching workers. 
00:29:29.998 ======================================================== 00:29:29.998 Latency(us) 00:29:29.998 Device Information : IOPS MiB/s Average min max 00:29:29.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11883.21 5.80 10776.10 1648.51 24735.43 00:29:29.998 ======================================================== 00:29:29.998 Total : 11883.21 5.80 10776.10 1648.51 24735.43 00:29:29.998 00:29:29.998 00:09:35 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:29.998 00:09:35 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:29.998 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.960 Initializing NVMe Controllers 00:29:39.960 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:39.960 Controller IO queue size 128, less than required. 00:29:39.960 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:39.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:39.960 Initialization complete. Launching workers. 
00:29:39.960 ======================================================== 00:29:39.960 Latency(us) 00:29:39.960 Device Information : IOPS MiB/s Average min max 00:29:39.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1180.30 147.54 108562.31 15341.75 234288.02 00:29:39.960 ======================================================== 00:29:39.960 Total : 1180.30 147.54 108562.31 15341.75 234288.02 00:29:39.960 00:29:39.960 00:09:45 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:40.218 00:09:45 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9120ea94-4e94-4ad1-b4fb-7aaaf2eb4a57 00:29:41.150 00:09:46 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:41.150 00:09:46 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 635329e7-2f8a-4663-bb17-7da2483ce463 00:29:41.407 00:09:47 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:41.664 rmmod 
nvme_tcp 00:29:41.664 rmmod nvme_fabrics 00:29:41.664 rmmod nvme_keyring 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 883259 ']' 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 883259 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 883259 ']' 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 883259 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:41.664 00:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 883259 00:29:41.923 00:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:41.923 00:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:41.923 00:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 883259' 00:29:41.923 killing process with pid 883259 00:29:41.923 00:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 883259 00:29:41.923 00:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 883259 00:29:43.324 00:09:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:43.324 00:09:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:43.324 00:09:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:43.324 00:09:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:43.324 00:09:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:29:43.324 00:09:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.324 00:09:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:43.324 00:09:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.856 00:09:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:45.856 00:29:45.856 real 1m30.658s 00:29:45.856 user 5m33.158s 00:29:45.856 sys 0m15.752s 00:29:45.856 00:09:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:45.856 00:09:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:45.856 ************************************ 00:29:45.856 END TEST nvmf_perf 00:29:45.856 ************************************ 00:29:45.856 00:09:51 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:45.856 00:09:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:45.856 00:09:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:45.856 00:09:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:45.856 ************************************ 00:29:45.856 START TEST nvmf_fio_host 00:29:45.856 ************************************ 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:45.856 * Looking for test storage... 
00:29:45.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.856 00:09:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:45.857 
00:09:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:45.857 00:09:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:47.758 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:47.758 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.758 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:47.759 Found net devices under 0000:09:00.0: cvl_0_0 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:47.759 Found net devices under 0000:09:00.1: cvl_0_1 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:29:47.759 
00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:47.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:29:47.759 00:29:47.759 --- 10.0.0.2 ping statistics --- 00:29:47.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.759 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:47.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:29:47.759 00:29:47.759 --- 10.0.0.1 ping statistics --- 00:29:47.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.759 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:47.759 00:09:53 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=895832 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 895832 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 895832 ']' 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:47.759 00:09:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.759 [2024-07-25 00:09:53.301245] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:29:47.759 [2024-07-25 00:09:53.301330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.759 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.759 [2024-07-25 00:09:53.368445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:47.759 [2024-07-25 00:09:53.461316] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.759 [2024-07-25 00:09:53.461374] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.759 [2024-07-25 00:09:53.461391] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.759 [2024-07-25 00:09:53.461415] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.759 [2024-07-25 00:09:53.461427] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:47.759 [2024-07-25 00:09:53.461513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.759 [2024-07-25 00:09:53.461568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.759 [2024-07-25 00:09:53.461683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.759 [2024-07-25 00:09:53.461685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.016 00:09:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:48.016 00:09:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:29:48.016 00:09:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:48.274 [2024-07-25 00:09:53.860783] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.274 00:09:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:48.274 00:09:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.274 00:09:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.274 00:09:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:48.531 Malloc1 00:29:48.531 00:09:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:48.788 00:09:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:49.045 00:09:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.302 
[2024-07-25 00:09:54.887423] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.302 00:09:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:49.559 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:49.560 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:49.560 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:49.560 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:49.560 00:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:49.817 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:49.817 fio-3.35 00:29:49.817 Starting 1 thread 00:29:49.817 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.343 00:29:52.343 test: (groupid=0, jobs=1): err= 0: pid=896188: Thu Jul 25 00:09:57 2024 00:29:52.343 read: IOPS=9254, BW=36.2MiB/s (37.9MB/s)(72.5MiB/2006msec) 00:29:52.343 slat (nsec): 
min=1901, max=153079, avg=2513.93, stdev=1838.68 00:29:52.343 clat (usec): min=2637, max=12616, avg=7652.93, stdev=556.14 00:29:52.343 lat (usec): min=2663, max=12618, avg=7655.44, stdev=556.02 00:29:52.343 clat percentiles (usec): 00:29:52.343 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7242], 00:29:52.343 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:29:52.343 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8291], 95.00th=[ 8455], 00:29:52.343 | 99.00th=[ 8848], 99.50th=[ 9110], 99.90th=[11076], 99.95th=[12256], 00:29:52.343 | 99.99th=[12649] 00:29:52.343 bw ( KiB/s): min=36016, max=37504, per=99.93%, avg=36994.00, stdev=665.06, samples=4 00:29:52.343 iops : min= 9004, max= 9376, avg=9248.50, stdev=166.26, samples=4 00:29:52.343 write: IOPS=9260, BW=36.2MiB/s (37.9MB/s)(72.6MiB/2006msec); 0 zone resets 00:29:52.343 slat (usec): min=2, max=132, avg= 2.60, stdev= 1.36 00:29:52.343 clat (usec): min=1436, max=11835, avg=6130.06, stdev=489.82 00:29:52.343 lat (usec): min=1445, max=11837, avg=6132.66, stdev=489.78 00:29:52.343 clat percentiles (usec): 00:29:52.343 | 1.00th=[ 5080], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5735], 00:29:52.343 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6259], 00:29:52.343 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:29:52.343 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[ 9896], 99.95th=[10945], 00:29:52.343 | 99.99th=[11731] 00:29:52.343 bw ( KiB/s): min=36792, max=37312, per=99.98%, avg=37034.00, stdev=260.90, samples=4 00:29:52.343 iops : min= 9198, max= 9328, avg=9258.50, stdev=65.23, samples=4 00:29:52.343 lat (msec) : 2=0.01%, 4=0.11%, 10=99.77%, 20=0.12% 00:29:52.343 cpu : usr=57.16%, sys=36.71%, ctx=61, majf=0, minf=36 00:29:52.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:52.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:52.343 issued rwts: total=18565,18576,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:52.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:52.343 00:29:52.343 Run status group 0 (all jobs): 00:29:52.343 READ: bw=36.2MiB/s (37.9MB/s), 36.2MiB/s-36.2MiB/s (37.9MB/s-37.9MB/s), io=72.5MiB (76.0MB), run=2006-2006msec 00:29:52.343 WRITE: bw=36.2MiB/s (37.9MB/s), 36.2MiB/s-36.2MiB/s (37.9MB/s-37.9MB/s), io=72.6MiB (76.1MB), run=2006-2006msec 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:52.343 00:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:52.343 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:52.343 fio-3.35 00:29:52.343 Starting 1 thread 00:29:52.343 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.871 00:29:54.871 test: (groupid=0, jobs=1): err= 0: pid=896526: Thu Jul 25 00:10:00 2024 00:29:54.871 read: IOPS=8567, BW=134MiB/s (140MB/s)(269MiB/2007msec) 00:29:54.871 slat (nsec): 
min=2920, max=89787, avg=3576.83, stdev=1561.27 00:29:54.871 clat (usec): min=2563, max=17953, avg=8882.26, stdev=2009.94 00:29:54.871 lat (usec): min=2567, max=17956, avg=8885.84, stdev=2009.96 00:29:54.871 clat percentiles (usec): 00:29:54.871 | 1.00th=[ 4686], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 7177], 00:29:54.871 | 30.00th=[ 7832], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9372], 00:29:54.871 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11338], 95.00th=[12256], 00:29:54.871 | 99.00th=[14222], 99.50th=[15533], 99.90th=[17695], 99.95th=[17695], 00:29:54.871 | 99.99th=[17957] 00:29:54.871 bw ( KiB/s): min=59424, max=75808, per=50.71%, avg=69512.00, stdev=7742.39, samples=4 00:29:54.871 iops : min= 3714, max= 4738, avg=4344.50, stdev=483.90, samples=4 00:29:54.871 write: IOPS=5064, BW=79.1MiB/s (83.0MB/s)(142MiB/1793msec); 0 zone resets 00:29:54.871 slat (usec): min=30, max=144, avg=33.20, stdev= 4.75 00:29:54.871 clat (usec): min=5031, max=19451, avg=10841.78, stdev=1892.44 00:29:54.871 lat (usec): min=5063, max=19482, avg=10874.98, stdev=1892.44 00:29:54.871 clat percentiles (usec): 00:29:54.871 | 1.00th=[ 7242], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9241], 00:29:54.871 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10683], 60.00th=[11076], 00:29:54.871 | 70.00th=[11600], 80.00th=[12256], 90.00th=[13304], 95.00th=[14484], 00:29:54.871 | 99.00th=[15926], 99.50th=[16450], 99.90th=[18744], 99.95th=[19268], 00:29:54.871 | 99.99th=[19530] 00:29:54.871 bw ( KiB/s): min=60352, max=80480, per=89.37%, avg=72416.00, stdev=9143.75, samples=4 00:29:54.871 iops : min= 3772, max= 5030, avg=4526.00, stdev=571.48, samples=4 00:29:54.871 lat (msec) : 4=0.12%, 10=58.97%, 20=40.91% 00:29:54.871 cpu : usr=74.38%, sys=21.98%, ctx=22, majf=0, minf=68 00:29:54.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:29:54.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:54.871 issued rwts: total=17195,9080,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:54.871 00:29:54.871 Run status group 0 (all jobs): 00:29:54.871 READ: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=269MiB (282MB), run=2007-2007msec 00:29:54.871 WRITE: bw=79.1MiB/s (83.0MB/s), 79.1MiB/s-79.1MiB/s (83.0MB/s-83.0MB/s), io=142MiB (149MB), run=1793-1793msec 00:29:54.871 00:10:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:55.129 00:10:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:55.129 00:10:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:55.129 00:10:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:55.129 00:10:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:55.129 00:10:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:29:55.129 00:10:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:55.129 00:10:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:55.129 00:10:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:55.129 00:10:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:55.129 00:10:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:0b:00.0 00:29:55.129 00:10:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 -i 10.0.0.2 00:29:58.406 Nvme0n1 00:29:58.406 00:10:03 
nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:01.682 00:10:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=172aaf41-b3d2-42e1-8b22-0d1f89de37ec 00:30:01.682 00:10:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 172aaf41-b3d2-42e1-8b22-0d1f89de37ec 00:30:01.682 00:10:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=172aaf41-b3d2-42e1-8b22-0d1f89de37ec 00:30:01.682 00:10:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:01.682 00:10:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:01.682 00:10:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:01.682 00:10:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:01.682 00:10:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:01.682 { 00:30:01.682 "uuid": "172aaf41-b3d2-42e1-8b22-0d1f89de37ec", 00:30:01.682 "name": "lvs_0", 00:30:01.682 "base_bdev": "Nvme0n1", 00:30:01.682 "total_data_clusters": 930, 00:30:01.682 "free_clusters": 930, 00:30:01.682 "block_size": 512, 00:30:01.682 "cluster_size": 1073741824 00:30:01.682 } 00:30:01.682 ]' 00:30:01.682 00:10:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="172aaf41-b3d2-42e1-8b22-0d1f89de37ec") .free_clusters' 00:30:01.682 00:10:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:30:01.682 00:10:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="172aaf41-b3d2-42e1-8b22-0d1f89de37ec") .cluster_size' 00:30:01.682 00:10:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:01.682 00:10:06 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1369 -- # free_mb=952320 00:30:01.682 00:10:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:30:01.682 952320 00:30:01.682 00:10:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:01.682 1b145a90-2184-4c13-9397-93b03a5510fc 00:30:01.682 00:10:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:01.941 00:10:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:02.199 00:10:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:02.457 00:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:02.716 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:02.716 fio-3.35 00:30:02.716 Starting 1 thread 00:30:02.716 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.245 00:30:05.245 test: (groupid=0, jobs=1): err= 0: pid=897803: Thu Jul 25 00:10:10 2024 00:30:05.245 read: IOPS=6158, BW=24.1MiB/s (25.2MB/s)(48.3MiB/2008msec) 00:30:05.245 slat (usec): min=2, max=145, avg= 2.72, stdev= 2.02 00:30:05.245 clat (usec): min=779, max=171765, avg=11470.83, stdev=11535.13 00:30:05.245 lat (usec): min=783, max=171810, avg=11473.55, stdev=11535.40 00:30:05.245 clat percentiles (msec): 00:30:05.245 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 10], 00:30:05.245 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:30:05.245 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 12], 00:30:05.245 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:05.245 | 99.99th=[ 171] 00:30:05.245 bw ( KiB/s): min=17376, max=27352, per=99.86%, avg=24600.00, stdev=4823.07, samples=4 00:30:05.245 iops : min= 4344, max= 6838, avg=6150.00, stdev=1205.77, samples=4 00:30:05.245 write: IOPS=6140, BW=24.0MiB/s (25.2MB/s)(48.2MiB/2008msec); 0 zone resets 00:30:05.245 slat (usec): min=2, max=114, avg= 2.81, stdev= 1.46 00:30:05.245 clat (usec): min=358, max=169418, avg=9202.34, stdev=10813.79 00:30:05.245 lat (usec): min=361, max=169424, avg=9205.14, stdev=10814.05 00:30:05.245 clat percentiles (msec): 00:30:05.245 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:30:05.245 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:05.245 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:30:05.245 | 99.00th=[ 11], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:30:05.245 | 99.99th=[ 169] 00:30:05.245 bw ( KiB/s): min=18408, max=26672, per=99.90%, avg=24540.00, stdev=4088.54, samples=4 00:30:05.245 iops : min= 
4602, max= 6668, avg=6135.00, stdev=1022.13, samples=4 00:30:05.245 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:05.245 lat (msec) : 2=0.03%, 4=0.12%, 10=60.06%, 20=39.25%, 250=0.52% 00:30:05.245 cpu : usr=55.85%, sys=39.61%, ctx=99, majf=0, minf=36 00:30:05.245 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:05.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:05.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:05.245 issued rwts: total=12366,12331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:05.245 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:05.245 00:30:05.245 Run status group 0 (all jobs): 00:30:05.245 READ: bw=24.1MiB/s (25.2MB/s), 24.1MiB/s-24.1MiB/s (25.2MB/s-25.2MB/s), io=48.3MiB (50.7MB), run=2008-2008msec 00:30:05.245 WRITE: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=48.2MiB (50.5MB), run=2008-2008msec 00:30:05.245 00:10:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:05.245 00:10:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:06.617 00:10:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=78ecfab0-f559-43c4-8e12-afe75fef738d 00:30:06.617 00:10:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 78ecfab0-f559-43c4-8e12-afe75fef738d 00:30:06.617 00:10:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=78ecfab0-f559-43c4-8e12-afe75fef738d 00:30:06.617 00:10:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:06.617 00:10:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:06.617 00:10:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 
-- # local cs 00:30:06.617 00:10:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:06.617 00:10:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:06.617 { 00:30:06.617 "uuid": "172aaf41-b3d2-42e1-8b22-0d1f89de37ec", 00:30:06.617 "name": "lvs_0", 00:30:06.617 "base_bdev": "Nvme0n1", 00:30:06.617 "total_data_clusters": 930, 00:30:06.617 "free_clusters": 0, 00:30:06.617 "block_size": 512, 00:30:06.617 "cluster_size": 1073741824 00:30:06.617 }, 00:30:06.617 { 00:30:06.617 "uuid": "78ecfab0-f559-43c4-8e12-afe75fef738d", 00:30:06.617 "name": "lvs_n_0", 00:30:06.617 "base_bdev": "1b145a90-2184-4c13-9397-93b03a5510fc", 00:30:06.617 "total_data_clusters": 237847, 00:30:06.617 "free_clusters": 237847, 00:30:06.617 "block_size": 512, 00:30:06.617 "cluster_size": 4194304 00:30:06.617 } 00:30:06.617 ]' 00:30:06.617 00:10:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="78ecfab0-f559-43c4-8e12-afe75fef738d") .free_clusters' 00:30:06.617 00:10:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:06.617 00:10:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="78ecfab0-f559-43c4-8e12-afe75fef738d") .cluster_size' 00:30:06.617 00:10:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:06.617 00:10:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:06.617 00:10:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:06.617 951388 00:30:06.617 00:10:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:07.547 2bd50712-73fb-47c5-9965-7378f2d4c058 00:30:07.547 00:10:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:07.547 00:10:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:07.829 00:10:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:08.085 00:10:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:08.342 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:08.342 fio-3.35 00:30:08.342 Starting 1 thread 00:30:08.342 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.865 00:30:10.865 test: (groupid=0, jobs=1): err= 0: pid=898649: Thu Jul 25 00:10:16 2024 00:30:10.865 read: IOPS=5961, BW=23.3MiB/s 
(24.4MB/s)(46.8MiB/2008msec) 00:30:10.865 slat (nsec): min=1912, max=160978, avg=2392.58, stdev=1917.22 00:30:10.865 clat (usec): min=4400, max=20793, avg=11911.35, stdev=1019.25 00:30:10.865 lat (usec): min=4404, max=20795, avg=11913.75, stdev=1019.15 00:30:10.865 clat percentiles (usec): 00:30:10.865 | 1.00th=[ 9503], 5.00th=[10421], 10.00th=[10683], 20.00th=[11076], 00:30:10.865 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:30:10.865 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13042], 95.00th=[13435], 00:30:10.865 | 99.00th=[14091], 99.50th=[14353], 99.90th=[19792], 99.95th=[19792], 00:30:10.865 | 99.99th=[20841] 00:30:10.865 bw ( KiB/s): min=22872, max=24168, per=99.74%, avg=23782.00, stdev=610.53, samples=4 00:30:10.865 iops : min= 5718, max= 6042, avg=5945.50, stdev=152.63, samples=4 00:30:10.865 write: IOPS=5949, BW=23.2MiB/s (24.4MB/s)(46.7MiB/2008msec); 0 zone resets 00:30:10.865 slat (nsec): min=2019, max=90243, avg=2493.90, stdev=1174.17 00:30:10.865 clat (usec): min=2305, max=17066, avg=9471.09, stdev=868.93 00:30:10.865 lat (usec): min=2311, max=17068, avg=9473.58, stdev=868.89 00:30:10.865 clat percentiles (usec): 00:30:10.865 | 1.00th=[ 7439], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8848], 00:30:10.865 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:30:10.865 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:30:10.865 | 99.00th=[11338], 99.50th=[11600], 99.90th=[14484], 99.95th=[15926], 00:30:10.865 | 99.99th=[16909] 00:30:10.865 bw ( KiB/s): min=23744, max=23872, per=100.00%, avg=23798.00, stdev=55.95, samples=4 00:30:10.865 iops : min= 5936, max= 5968, avg=5949.50, stdev=13.99, samples=4 00:30:10.865 lat (msec) : 4=0.05%, 10=38.73%, 20=61.21%, 50=0.01% 00:30:10.865 cpu : usr=56.70%, sys=38.96%, ctx=50, majf=0, minf=36 00:30:10.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:10.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:30:10.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:10.865 issued rwts: total=11970,11947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:10.865 00:30:10.865 Run status group 0 (all jobs): 00:30:10.865 READ: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=46.8MiB (49.0MB), run=2008-2008msec 00:30:10.865 WRITE: bw=23.2MiB/s (24.4MB/s), 23.2MiB/s-23.2MiB/s (24.4MB/s-24.4MB/s), io=46.7MiB (48.9MB), run=2008-2008msec 00:30:10.865 00:10:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:11.123 00:10:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:11.123 00:10:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:15.304 00:10:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:15.304 00:10:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:18.586 00:10:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:18.586 00:10:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:19.955 00:10:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:19.955 00:10:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:19.955 00:10:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:19.955 00:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:30:19.955 00:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:19.955 00:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:19.955 00:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:19.955 00:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:19.955 00:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:19.955 rmmod nvme_tcp 00:30:19.955 rmmod nvme_fabrics 00:30:19.955 rmmod nvme_keyring 00:30:19.955 00:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:19.955 00:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:19.955 00:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:19.955 00:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 895832 ']' 00:30:19.955 00:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 895832 00:30:19.955 00:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 895832 ']' 00:30:19.955 00:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 895832 00:30:19.955 00:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:20.214 00:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:20.214 00:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 895832 00:30:20.214 00:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:20.214 00:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:20.214 00:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 895832' 00:30:20.214 killing process with pid 895832 00:30:20.214 00:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 895832 00:30:20.214 00:10:25 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 895832 00:30:20.214 00:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:20.214 00:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:20.214 00:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:20.214 00:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:20.214 00:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:20.214 00:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.214 00:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:20.214 00:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.749 00:10:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:22.749 00:30:22.749 real 0m36.934s 00:30:22.749 user 2m21.215s 00:30:22.749 sys 0m7.001s 00:30:22.749 00:10:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:22.749 00:10:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.749 ************************************ 00:30:22.749 END TEST nvmf_fio_host 00:30:22.749 ************************************ 00:30:22.749 00:10:27 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:22.749 00:10:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:22.749 00:10:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:22.749 00:10:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:22.749 ************************************ 00:30:22.749 START TEST nvmf_failover 00:30:22.749 ************************************ 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:22.749 * Looking for test storage... 00:30:22.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:22.749 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:22.750 00:10:28 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:22.750 00:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:24.655 00:10:29 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:24.655 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:24.655 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:24.655 Found net devices under 0000:09:00.0: cvl_0_0 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:24.655 Found net devices under 0000:09:00.1: cvl_0_1 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:24.655 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:24.656 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:24.656 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:24.656 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:24.656 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:24.656 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:24.656 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:24.656 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:24.656 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:24.656 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:24.656 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:24.656 00:10:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:24.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:24.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:30:24.656 00:30:24.656 --- 10.0.0.2 ping statistics --- 00:30:24.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.656 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:24.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:24.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:30:24.656 00:30:24.656 --- 10.0.0.1 ping statistics --- 00:30:24.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.656 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=901891 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 901891 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 901891 ']' 
00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:24.656 00:10:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:24.656 [2024-07-25 00:10:30.145169] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:30:24.656 [2024-07-25 00:10:30.145246] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.656 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.656 [2024-07-25 00:10:30.218251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:24.656 [2024-07-25 00:10:30.311368] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:24.656 [2024-07-25 00:10:30.311440] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:24.656 [2024-07-25 00:10:30.311456] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:24.656 [2024-07-25 00:10:30.311469] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:24.656 [2024-07-25 00:10:30.311481] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:24.656 [2024-07-25 00:10:30.311567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:24.656 [2024-07-25 00:10:30.311688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:24.656 [2024-07-25 00:10:30.311691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.914 00:10:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:24.915 00:10:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:24.915 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:24.915 00:10:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:24.915 00:10:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:24.915 00:10:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.915 00:10:30 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:25.173 [2024-07-25 00:10:30.660169] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:25.173 00:10:30 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:25.431 Malloc0 00:30:25.431 00:10:30 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:25.689 00:10:31 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:25.947 00:10:31 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:26.205 [2024-07-25 00:10:31.670138] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.205 00:10:31 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:26.463 [2024-07-25 00:10:31.930882] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:26.463 00:10:31 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:26.463 [2024-07-25 00:10:32.167660] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:26.721 00:10:32 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=902068 00:30:26.722 00:10:32 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:26.722 00:10:32 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:26.722 00:10:32 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 902068 /var/tmp/bdevperf.sock 00:30:26.722 00:10:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 902068 ']' 00:30:26.722 00:10:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:26.722 00:10:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:26.722 00:10:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:26.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:26.722 00:10:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:26.722 00:10:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:26.980 00:10:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:26.980 00:10:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:26.980 00:10:32 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:27.238 NVMe0n1 00:30:27.238 00:10:32 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:27.806 00:30:27.806 00:10:33 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=902196 00:30:27.806 00:10:33 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:27.806 00:10:33 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:28.741 00:10:34 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:28.999 [2024-07-25 00:10:34.496963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0090 is same with the state(5) to be set 00:30:28.999 [2024-07-25 00:10:34.497049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0090 is same 
with the state(5) to be set 00:30:28.999 [2024-07-25
00:10:34.497396] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0090 is same with the state(5) to be set 00:30:28.999 [2024-07-25 00:10:34.497414] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0090 is same with the state(5) to be set 00:30:29.000 [2024-07-25 00:10:34.497426] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0090 is same with the state(5) to be set 00:30:29.000 [2024-07-25 00:10:34.497437] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0090 is same with the state(5) to be set 00:30:29.000 [2024-07-25 00:10:34.497449] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0090 is same with the state(5) to be set 00:30:29.000 00:10:34 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:32.300 00:10:37 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:32.300 00:30:32.300 00:10:37 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:32.558 00:10:38 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:35.838 00:10:41 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.838 [2024-07-25 00:10:41.423541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.838 00:10:41 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:36.769 00:10:42 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:37.027 00:10:42 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 902196 00:30:43.588 0 00:30:43.588 00:10:48 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 902068 00:30:43.588 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 902068 ']' 00:30:43.588 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 902068 00:30:43.588 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:43.588 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:43.588 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 902068 00:30:43.588 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:43.588 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:43.588 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 902068' 00:30:43.588 killing process with pid 902068 00:30:43.588 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 902068 00:30:43.588 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 902068 00:30:43.588 00:10:48 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:43.588 [2024-07-25 00:10:32.228846] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:30:43.588 [2024-07-25 00:10:32.228955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902068 ] 00:30:43.588 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.588 [2024-07-25 00:10:32.292137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.588 [2024-07-25 00:10:32.379754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.588 Running I/O for 15 seconds... 00:30:43.588 [2024-07-25 00:10:34.497698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.588 [2024-07-25 00:10:34.497740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.497757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.588 [2024-07-25 00:10:34.497772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.497786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.588 [2024-07-25 00:10:34.497799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.497813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.588 [2024-07-25 00:10:34.497827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 
[2024-07-25 00:10:34.497840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc28740 is same with the state(5) to be set 00:30:43.588 [2024-07-25 00:10:34.497921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.497942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.497966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.497982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498111] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498476] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 
nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.588 [2024-07-25 00:10:34.498645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.588 [2024-07-25 00:10:34.498660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.498673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.498687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.498701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.498715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.498729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.498744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.498758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.498772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.498786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 
[2024-07-25 00:10:34.498801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.498814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.498829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.498842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.498856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.498869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.498883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.498899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.498914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.498927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.498942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.498956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.498969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.498983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.498997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 
lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 
00:10:34.499322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499479] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.589 [2024-07-25 00:10:34.499775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.589 [2024-07-25 00:10:34.499790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.499806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.499821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.499842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.499858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.590 [2024-07-25 00:10:34.499871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.499886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.590 [2024-07-25 00:10:34.499899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.499915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.499928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.499943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.499956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.499971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.499985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 
[2024-07-25 00:10:34.500170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500328] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.590 [2024-07-25 00:10:34.500573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.590 [2024-07-25 00:10:34.500612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.590 [2024-07-25 00:10:34.500640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.590 [2024-07-25 00:10:34.500669] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.590 [2024-07-25 00:10:34.500708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.590 [2024-07-25 00:10:34.500737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.590 [2024-07-25 00:10:34.500765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.590 [2024-07-25 00:10:34.500793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.590 [2024-07-25 00:10:34.500826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:117 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.590 [2024-07-25 00:10:34.500855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.590 [2024-07-25 00:10:34.500884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.590 [2024-07-25 00:10:34.500918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.590 [2024-07-25 00:10:34.500932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.591 [2024-07-25 00:10:34.500945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.500960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.591 [2024-07-25 00:10:34.500973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.500988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.591 [2024-07-25 00:10:34.501002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:43.591 [2024-07-25 00:10:34.501017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.591 [2024-07-25 00:10:34.501030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.591 [2024-07-25 00:10:34.501058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 
00:10:34.501537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501698] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:34.501754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.591 [2024-07-25 00:10:34.501797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.591 [2024-07-25 00:10:34.501809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78912 len:8 PRP1 0x0 PRP2 0x0 00:30:43.591 [2024-07-25 00:10:34.501821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:34.501882] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc47ef0 was disconnected and freed. reset controller. 00:30:43.591 [2024-07-25 00:10:34.501899] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:43.591 [2024-07-25 00:10:34.501914] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:43.591 [2024-07-25 00:10:34.505228] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:43.591 [2024-07-25 00:10:34.505264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc28740 (9): Bad file descriptor 00:30:43.591 [2024-07-25 00:10:34.535839] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:43.591 [2024-07-25 00:10:38.123738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.591 [2024-07-25 00:10:38.123799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:38.123817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.591 [2024-07-25 00:10:38.123840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:38.123854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.591 [2024-07-25 00:10:38.123867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:38.123881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.591 [2024-07-25 00:10:38.123894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:38.123906] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc28740 is same with the state(5) to be set 
00:30:43.591 [2024-07-25 00:10:38.126508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.591 [2024-07-25 00:10:38.126534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.591 [2024-07-25 00:10:38.126574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.591 [2024-07-25 00:10:38.126589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.126605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.126618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.126633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.126646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.126660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.126673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.126688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.126701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.126716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.126729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.126743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.126756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.126771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.126784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.126798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.126811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.126831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.126845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.126860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.126873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.126887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.126900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.126915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.126928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.126942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.126955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.126970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.126983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.126998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.127011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 
[2024-07-25 00:10:38.127026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.127039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.127053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.127067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.127081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.127119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.127136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.127150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.127166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.127180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.127195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.127212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.127228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.127242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.127257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.127270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.127285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.127298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.127313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.127326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.127341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.127354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.127369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 
lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.592 [2024-07-25 00:10:38.127401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.127417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.592 [2024-07-25 00:10:38.127430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.127445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.592 [2024-07-25 00:10:38.127458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.127472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.592 [2024-07-25 00:10:38.127486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.592 [2024-07-25 00:10:38.127500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.127529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 
[2024-07-25 00:10:38.127556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.127588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.127617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.127644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.127672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.127699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.127727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.127754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.127782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.127809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.127836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.127865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 
lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.127894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.127922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.127954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.127982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.127996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 
00:10:38.128038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128218] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.593 [2024-07-25 00:10:38.128616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.593 [2024-07-25 00:10:38.128629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.128644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.594 [2024-07-25 00:10:38.128657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.128671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.594 [2024-07-25 00:10:38.128684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.128702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.594 [2024-07-25 00:10:38.128716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.128730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.594 [2024-07-25 00:10:38.128746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.128761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.594 [2024-07-25 00:10:38.128775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.128789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.594 [2024-07-25 00:10:38.128803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.128833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.128850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78064 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 [2024-07-25 00:10:38.128864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.128883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.128894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.128905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78072 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 
[2024-07-25 00:10:38.128918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.128931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.128942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.128953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78080 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 [2024-07-25 00:10:38.128965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.128978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.128989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.128999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78088 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 [2024-07-25 00:10:38.129012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.129024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.129050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.129062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78096 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 [2024-07-25 00:10:38.129075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.129088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.129099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.129123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78104 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 [2024-07-25 00:10:38.129138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.129152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.129163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.129174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78112 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 [2024-07-25 00:10:38.129187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.129200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.129211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.129222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78120 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 [2024-07-25 00:10:38.129235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.129248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.129259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.129271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78128 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 [2024-07-25 00:10:38.129284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.129297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.129308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.129320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78136 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 [2024-07-25 00:10:38.129333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.129346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.129357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.129368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78144 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 [2024-07-25 00:10:38.129381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.129395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.129406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.129418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78152 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 [2024-07-25 00:10:38.129431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.129444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.129455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.129466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78160 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 [2024-07-25 00:10:38.129479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.129492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.129509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.129521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78168 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 [2024-07-25 00:10:38.129534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.129547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.129558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.129569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78176 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 [2024-07-25 00:10:38.129582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.129595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.129605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.129616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78184 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 [2024-07-25 00:10:38.129629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.129641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.129652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.129663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78192 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 [2024-07-25 00:10:38.129676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.129689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.129700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.129711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78200 len:8 PRP1 0x0 PRP2 0x0 00:30:43.594 [2024-07-25 00:10:38.129724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.594 [2024-07-25 00:10:38.129736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.594 [2024-07-25 00:10:38.129747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.594 [2024-07-25 00:10:38.129758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78208 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.129770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.129783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.129793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.129804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78216 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.129816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.129829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.129840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.129851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78224 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.129863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.129879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.129891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.129901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78232 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.129914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.129926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 
[2024-07-25 00:10:38.129937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.129948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78240 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.129961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.129973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.129984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.129995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78248 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.130030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.130041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78256 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.130077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.130088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:78264 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.130134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.130146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78272 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.130182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.130193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78280 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.130236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.130247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78288 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130277] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.130287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.130299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78296 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.130335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.130346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78304 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.130382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.130393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78312 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.130429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 
00:10:38.130440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78320 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.130476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.130487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78328 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.130523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.130534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78336 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.130569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.130580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78344 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.130629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.130640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77552 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.130677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.130688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77560 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.130725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.130735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77568 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.130771] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.130782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77576 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.595 [2024-07-25 00:10:38.130818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.595 [2024-07-25 00:10:38.130829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77584 len:8 PRP1 0x0 PRP2 0x0 00:30:43.595 [2024-07-25 00:10:38.130842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.595 [2024-07-25 00:10:38.130855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.596 [2024-07-25 00:10:38.130866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.596 [2024-07-25 00:10:38.130877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77592 len:8 PRP1 0x0 PRP2 0x0 00:30:43.596 [2024-07-25 00:10:38.130889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.596 [2024-07-25 00:10:38.130902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.596 [2024-07-25 00:10:38.130912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.596 [2024-07-25 00:10:38.130923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77600 len:8 PRP1 0x0 PRP2 0x0 00:30:43.596 
[2024-07-25 00:10:38.130936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-25 00:10:38.130948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-07-25 00:10:38.130959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-07-25 00:10:38.130970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77608 len:8 PRP1 0x0 PRP2 0x0
[... the same abort/manual-complete/READ sequence repeats for lba:77616 through lba:77664, each completed with ABORTED - SQ DELETION (00/08); Jenkins line timestamps 00:30:43.596 omitted ...]
[2024-07-25 00:10:38.131388] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc4a130 was disconnected and freed. reset controller.
[2024-07-25 00:10:38.131406] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[2024-07-25 00:10:38.131421] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-25 00:10:38.134663] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-25 00:10:38.134705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc28740 (9): Bad file descriptor
[2024-07-25 00:10:38.219496] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[2024-07-25 00:10:42.670343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-25 00:10:42.670408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE/ABORTED - SQ DELETION (00/08) pair repeats for lba:20976 through lba:21712, plus two interleaved READ commands (cid:83 lba:20824, cid:105 lba:20832, SGL TRANSPORT DATA BLOCK); Jenkins line timestamps 00:30:43.596-00:30:43.599 omitted ...]
[2024-07-25 00:10:42.673233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21720 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:43.599 [2024-07-25 00:10:42.673246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.673278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.599 [2024-07-25 00:10:42.673294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:8 PRP1 0x0 PRP2 0x0 00:30:43.599 [2024-07-25 00:10:42.673307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.673356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.599 [2024-07-25 00:10:42.673377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.673392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.599 [2024-07-25 00:10:42.673409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.673424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.599 [2024-07-25 00:10:42.673436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.673450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.599 [2024-07-25 00:10:42.673462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.673475] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc28740 is same with the state(5) to be set 00:30:43.599 [2024-07-25 00:10:42.673702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.599 [2024-07-25 00:10:42.673722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.599 [2024-07-25 00:10:42.673734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21736 len:8 PRP1 0x0 PRP2 0x0 00:30:43.599 [2024-07-25 00:10:42.673747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.673763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.599 [2024-07-25 00:10:42.673775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.599 [2024-07-25 00:10:42.673786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21744 len:8 PRP1 0x0 PRP2 0x0 00:30:43.599 [2024-07-25 00:10:42.673799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.673812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.599 [2024-07-25 00:10:42.673824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.599 [2024-07-25 00:10:42.673835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21752 len:8 PRP1 0x0 PRP2 0x0 00:30:43.599 [2024-07-25 00:10:42.673847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 
00:10:42.673861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.599 [2024-07-25 00:10:42.673871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.599 [2024-07-25 00:10:42.673883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:8 PRP1 0x0 PRP2 0x0 00:30:43.599 [2024-07-25 00:10:42.673895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.673909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.599 [2024-07-25 00:10:42.673920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.599 [2024-07-25 00:10:42.673931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21768 len:8 PRP1 0x0 PRP2 0x0 00:30:43.599 [2024-07-25 00:10:42.673943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.673956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.599 [2024-07-25 00:10:42.673967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.599 [2024-07-25 00:10:42.673978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21776 len:8 PRP1 0x0 PRP2 0x0 00:30:43.599 [2024-07-25 00:10:42.673990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.674008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.599 [2024-07-25 00:10:42.674020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.599 
[2024-07-25 00:10:42.674031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21784 len:8 PRP1 0x0 PRP2 0x0 00:30:43.599 [2024-07-25 00:10:42.674044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.674057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.599 [2024-07-25 00:10:42.674068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.599 [2024-07-25 00:10:42.674079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:8 PRP1 0x0 PRP2 0x0 00:30:43.599 [2024-07-25 00:10:42.674092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.674112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.599 [2024-07-25 00:10:42.674125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.599 [2024-07-25 00:10:42.674136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20840 len:8 PRP1 0x0 PRP2 0x0 00:30:43.599 [2024-07-25 00:10:42.674149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.674162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.599 [2024-07-25 00:10:42.674173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.599 [2024-07-25 00:10:42.674184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20848 len:8 PRP1 0x0 PRP2 0x0 00:30:43.599 [2024-07-25 00:10:42.674197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.674209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.599 [2024-07-25 00:10:42.674220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.599 [2024-07-25 00:10:42.674232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20856 len:8 PRP1 0x0 PRP2 0x0 00:30:43.599 [2024-07-25 00:10:42.674244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.674257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.599 [2024-07-25 00:10:42.674268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.599 [2024-07-25 00:10:42.674279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:8 PRP1 0x0 PRP2 0x0 00:30:43.599 [2024-07-25 00:10:42.674292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.674305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.599 [2024-07-25 00:10:42.674316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.599 [2024-07-25 00:10:42.674327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20872 len:8 PRP1 0x0 PRP2 0x0 00:30:43.599 [2024-07-25 00:10:42.674339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.674352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.599 [2024-07-25 00:10:42.674363] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.599 [2024-07-25 00:10:42.674374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20880 len:8 PRP1 0x0 PRP2 0x0 00:30:43.599 [2024-07-25 00:10:42.674394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.599 [2024-07-25 00:10:42.674408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.599 [2024-07-25 00:10:42.674419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.674430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20888 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.674443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.674456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.674467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.674478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.674490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.674503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.674514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.674525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20904 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 
[2024-07-25 00:10:42.674537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.674550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.674561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.674572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20912 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.674584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.674597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.674607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.674618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20920 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.674631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.674643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.674654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.674665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20928 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.674677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.674690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.674701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.674712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20936 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.674724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.674737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.674748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.674762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20944 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.674775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.674789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.674799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.674810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20952 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.674823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.674836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.674847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.674857] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21800 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.674870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.674883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.674893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.674904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21808 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.674917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.674929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.674940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.674951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21816 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.674963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.674976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.674986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.674997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.675010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:43.600 [2024-07-25 00:10:42.675023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.675033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.675044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21832 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.675056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.675069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.675080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.675091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21840 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.675109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.675127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.675139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.675150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20960 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.675169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.675182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.675193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:30:43.600 [2024-07-25 00:10:42.675205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20968 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.675217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.675230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.675241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.675252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20976 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.675264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.675277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.675288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.675299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20984 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.675311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.675324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.675335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.675346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.675358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.675370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.675381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.675392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21000 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.675405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.675417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.675428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.675439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21008 len:8 PRP1 0x0 PRP2 0x0 00:30:43.600 [2024-07-25 00:10:42.675451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.600 [2024-07-25 00:10:42.675464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.600 [2024-07-25 00:10:42.675475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.600 [2024-07-25 00:10:42.675485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21016 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.675501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.675516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 
[2024-07-25 00:10:42.675528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.675539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.675557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.675571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.675582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.675593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21032 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.675606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.675619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.675630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.675641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21040 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.675654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.675667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.675678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.675689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:21048 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.675702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.675716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.675727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.675738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.675751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.675764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.675775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.675786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21064 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.675800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.675813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.675823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.675835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21072 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.675848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.675861] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.675872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.675886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21080 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.675899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.675912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.675924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.675935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.675948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.675961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.675972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.675984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21096 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.675996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.676009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.676021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 
00:10:42.676032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21104 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.676044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.676057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.676068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.676079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21112 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.676093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.676112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.676125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.676137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.676150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.676163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.676174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.676185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21128 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.676198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.676211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.676222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.676233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21136 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.676246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.676259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.676273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.676284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21144 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.676297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.676309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.676320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.676331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.676344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.676357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.676368] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.676379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21160 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.676391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.676404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.676414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.676426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21168 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.676438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.676450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.676461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.676472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21176 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.676484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.676497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.676508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.676518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 
[2024-07-25 00:10:42.676531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.676543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.601 [2024-07-25 00:10:42.676554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.601 [2024-07-25 00:10:42.676565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21192 len:8 PRP1 0x0 PRP2 0x0 00:30:43.601 [2024-07-25 00:10:42.676577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.601 [2024-07-25 00:10:42.676590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.676601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.676611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21200 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.676623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.676640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.676651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.676662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21208 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.676674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.676687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.676698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.676709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.676721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.676734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.676745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.676756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21224 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.676768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.676781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.676792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.676802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21232 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.676815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.676827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.676838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.676849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21240 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.676861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.676874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.676885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.676895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.676907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.676920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.676931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.676941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21256 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.676953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.676966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.676977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.676988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21264 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.677003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.677015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.677026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.677037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21272 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.677049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.677062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.677072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.677089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.677110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.677125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.677136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.677147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21288 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.677159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.677172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.677183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.677194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21296 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.677206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.677219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.677230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.677241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21304 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.677254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.677266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.677277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.677288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.677301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.677313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.677324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.683242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21320 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.683273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.683291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.683303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.683320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21328 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.683333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.683346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.683357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.683369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21336 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.683381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.683394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 [2024-07-25 00:10:42.683405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.602 [2024-07-25 00:10:42.683417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20824 len:8 PRP1 0x0 PRP2 0x0 00:30:43.602 [2024-07-25 00:10:42.683430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.602 [2024-07-25 00:10:42.683443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.602 
[2024-07-25 00:10:42.683454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.683464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20832 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.683476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.683489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.683500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.683511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.683523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.683536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.683547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.683558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21352 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.683570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.683583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.683594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.683605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:21360 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.683618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.683630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.683641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.683652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21368 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.683664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.683680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.683692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.683703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.683715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.683728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.683739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.683750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21384 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.683762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.683775] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.683785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.683796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21392 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.683809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.683822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.683832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.683843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21400 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.683855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.683868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.683879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.683889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.683902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.683914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.683925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 
00:10:42.683936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21416 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.683948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.683961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.683972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.683983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21424 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.683995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.684007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.684018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.684029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21432 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.684044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.684057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.684068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.684078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.684091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.684111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.684124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.684135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21448 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.684147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.684160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.684171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.684182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21456 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.684195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.684207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.684218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.684229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21464 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.684241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.684254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.684266] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.684277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.684289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.684302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.684312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.684323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21480 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.684335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.684348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.684359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.684370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21488 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.684382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.684394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.684405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.684419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21496 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 
[2024-07-25 00:10:42.684432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.684444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.684455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.684466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:8 PRP1 0x0 PRP2 0x0 00:30:43.603 [2024-07-25 00:10:42.684478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.603 [2024-07-25 00:10:42.684491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.603 [2024-07-25 00:10:42.684502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.603 [2024-07-25 00:10:42.684513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21512 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.684525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.684537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.684548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.684559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21520 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.684572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.684584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.684595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.684606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21528 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.684619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.684631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.684642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.684653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.684665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.684678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.684688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.684699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21544 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.684711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.684724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.684734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.684745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21552 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.684757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.684770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.684784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.684795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21560 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.684807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.684820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.684831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.684841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.684854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.684866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.684877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.684888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21576 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.684900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.684913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.684924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.684934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21584 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.684947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.684960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.684970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.684981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21592 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.684993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.685006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.685016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.685027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.685039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.685052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.685062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.685073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21608 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.685085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.685098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.685117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.685129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21616 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.685141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.685157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.685169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.685180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21624 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.685192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.685205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.685215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.685226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.685239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.685251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.685262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.685273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21640 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.685286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.685298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.685309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.685320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21648 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.685340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.685354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.685365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.685376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21656 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.685389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.685402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 
[2024-07-25 00:10:42.685413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.685424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.685436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.685449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.685460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.685471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21672 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.685484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.685497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.685507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.685518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21680 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.685534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.685563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.604 [2024-07-25 00:10:42.685574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.604 [2024-07-25 00:10:42.685585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:21688 len:8 PRP1 0x0 PRP2 0x0 00:30:43.604 [2024-07-25 00:10:42.685597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.604 [2024-07-25 00:10:42.685609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.605 [2024-07-25 00:10:42.685622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.605 [2024-07-25 00:10:42.685634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:8 PRP1 0x0 PRP2 0x0 00:30:43.605 [2024-07-25 00:10:42.685646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.605 [2024-07-25 00:10:42.685658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.605 [2024-07-25 00:10:42.685669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.605 [2024-07-25 00:10:42.685679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21704 len:8 PRP1 0x0 PRP2 0x0 00:30:43.605 [2024-07-25 00:10:42.685692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.605 [2024-07-25 00:10:42.685704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.605 [2024-07-25 00:10:42.685715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.605 [2024-07-25 00:10:42.685725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21712 len:8 PRP1 0x0 PRP2 0x0 00:30:43.605 [2024-07-25 00:10:42.685742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.605 [2024-07-25 00:10:42.685755] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.605 [2024-07-25 00:10:42.685766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.605 [2024-07-25 00:10:42.685776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21720 len:8 PRP1 0x0 PRP2 0x0 00:30:43.605 [2024-07-25 00:10:42.685788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.605 [2024-07-25 00:10:42.685801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:43.605 [2024-07-25 00:10:42.685811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:43.605 [2024-07-25 00:10:42.685822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:8 PRP1 0x0 PRP2 0x0 00:30:43.605 [2024-07-25 00:10:42.685834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.605 [2024-07-25 00:10:42.685892] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc4bff0 was disconnected and freed. reset controller. 00:30:43.605 [2024-07-25 00:10:42.685909] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:43.605 [2024-07-25 00:10:42.685924] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:43.605 [2024-07-25 00:10:42.685979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc28740 (9): Bad file descriptor 00:30:43.605 [2024-07-25 00:10:42.689254] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:43.605 [2024-07-25 00:10:42.847280] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:43.605 00:30:43.605 Latency(us) 00:30:43.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.605 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:43.605 Verification LBA range: start 0x0 length 0x4000 00:30:43.605 NVMe0n1 : 15.01 8615.86 33.66 703.88 0.00 13707.16 831.34 21359.88 00:30:43.605 =================================================================================================================== 00:30:43.605 Total : 8615.86 33.66 703.88 0.00 13707.16 831.34 21359.88 00:30:43.605 Received shutdown signal, test time was about 15.000000 seconds 00:30:43.605 00:30:43.605 Latency(us) 00:30:43.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.605 =================================================================================================================== 00:30:43.605 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:43.605 00:10:48 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:43.605 00:10:48 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:43.605 00:10:48 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:43.605 00:10:48 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=904030 00:30:43.605 00:10:48 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:43.605 00:10:48 
nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 904030 /var/tmp/bdevperf.sock 00:30:43.605 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 904030 ']' 00:30:43.605 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:43.605 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:43.605 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:43.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:43.605 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:43.605 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:43.605 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:43.605 00:10:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:43.605 00:10:48 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:43.605 [2024-07-25 00:10:49.183008] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:43.605 00:10:49 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:43.863 [2024-07-25 00:10:49.423676] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:43.863 00:10:49 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:44.120 NVMe0n1 00:30:44.120 00:10:49 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:44.685 00:30:44.685 00:10:50 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:44.942 00:30:44.942 00:10:50 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:44.942 00:10:50 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:45.200 00:10:50 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:45.457 00:10:50 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:48.730 00:10:53 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:48.730 00:10:53 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:48.730 00:10:54 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=904699 00:30:48.730 00:10:54 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:48.730 00:10:54 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 904699 00:30:49.663 0 00:30:49.663 00:10:55 nvmf_tcp.nvmf_failover -- host/failover.sh@94 
-- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:49.663 [2024-07-25 00:10:48.703957] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:30:49.663 [2024-07-25 00:10:48.704048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904030 ] 00:30:49.663 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.663 [2024-07-25 00:10:48.763486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.663 [2024-07-25 00:10:48.848404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.663 [2024-07-25 00:10:50.956183] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:49.663 [2024-07-25 00:10:50.956271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.663 [2024-07-25 00:10:50.956294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.663 [2024-07-25 00:10:50.956311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.663 [2024-07-25 00:10:50.956325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.663 [2024-07-25 00:10:50.956339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.663 [2024-07-25 00:10:50.956352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.663 [2024-07-25 00:10:50.956366] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.663 [2024-07-25 00:10:50.956380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.663 [2024-07-25 00:10:50.956394] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.663 [2024-07-25 00:10:50.956440] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.663 [2024-07-25 00:10:50.956471] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8f740 (9): Bad file descriptor 00:30:49.663 [2024-07-25 00:10:50.964271] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:49.663 Running I/O for 1 seconds... 00:30:49.663 00:30:49.663 Latency(us) 00:30:49.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:49.663 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:49.663 Verification LBA range: start 0x0 length 0x4000 00:30:49.663 NVMe0n1 : 1.01 9047.92 35.34 0.00 0.00 14089.45 1614.13 13010.11 00:30:49.663 =================================================================================================================== 00:30:49.663 Total : 9047.92 35.34 0.00 0.00 14089.45 1614.13 13010.11 00:30:49.663 00:10:55 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:49.663 00:10:55 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:49.921 00:10:55 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:30:50.178 00:10:55 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:50.178 00:10:55 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:50.435 00:10:56 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:50.692 00:10:56 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:53.968 00:10:59 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:53.968 00:10:59 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:54.225 00:10:59 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 904030 00:30:54.225 00:10:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 904030 ']' 00:30:54.225 00:10:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 904030 00:30:54.225 00:10:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:54.225 00:10:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:54.225 00:10:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 904030 00:30:54.225 00:10:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:54.225 00:10:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:54.225 00:10:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 904030' 00:30:54.225 killing process with pid 904030 00:30:54.225 00:10:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 904030 00:30:54.225 
00:10:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 904030 00:30:54.225 00:10:59 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:54.225 00:10:59 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:54.482 00:11:00 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:54.482 00:11:00 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:54.482 00:11:00 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:54.482 00:11:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:54.482 00:11:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:54.482 00:11:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:54.482 00:11:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:54.482 00:11:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:54.482 00:11:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:54.482 rmmod nvme_tcp 00:30:54.740 rmmod nvme_fabrics 00:30:54.740 rmmod nvme_keyring 00:30:54.740 00:11:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:54.740 00:11:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:54.740 00:11:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:54.740 00:11:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 901891 ']' 00:30:54.740 00:11:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 901891 00:30:54.740 00:11:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 901891 ']' 00:30:54.740 00:11:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 901891 00:30:54.740 00:11:00 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@951 -- # uname 00:30:54.740 00:11:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:54.740 00:11:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 901891 00:30:54.740 00:11:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:54.740 00:11:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:54.740 00:11:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 901891' 00:30:54.740 killing process with pid 901891 00:30:54.740 00:11:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 901891 00:30:54.740 00:11:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 901891 00:30:54.999 00:11:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:54.999 00:11:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:54.999 00:11:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:54.999 00:11:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:54.999 00:11:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:54.999 00:11:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.999 00:11:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:54.999 00:11:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.932 00:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:56.932 00:30:56.932 real 0m34.545s 00:30:56.932 user 2m1.790s 00:30:56.932 sys 0m5.834s 00:30:56.932 00:11:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:56.932 00:11:02 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:30:56.932 ************************************ 00:30:56.932 END TEST nvmf_failover 00:30:56.932 ************************************ 00:30:56.932 00:11:02 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:56.932 00:11:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:56.932 00:11:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:56.932 00:11:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:56.932 ************************************ 00:30:56.932 START TEST nvmf_host_discovery 00:30:56.932 ************************************ 00:30:56.932 00:11:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:57.192 * Looking for test storage... 00:30:57.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:57.192 00:11:02 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:57.192 00:11:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.097 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # 
pci_drivers=() 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:59.098 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:59.098 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:59.098 Found net devices under 0000:09:00.0: cvl_0_0 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:59.098 Found net devices under 0000:09:00.1: cvl_0_1 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:59.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:59.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:30:59.098 00:30:59.098 --- 10.0.0.2 ping statistics --- 00:30:59.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.098 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:59.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:30:59.098 00:30:59.098 --- 10.0.0.1 ping statistics --- 00:30:59.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.098 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:59.098 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:59.099 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:59.099 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:59.099 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:59.099 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:59.099 00:11:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:59.099 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:59.099 00:11:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:59.099 00:11:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.099 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=907295 00:30:59.099 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:59.099 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 907295 00:30:59.099 
00:11:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 907295 ']' 00:30:59.099 00:11:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:59.099 00:11:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:59.099 00:11:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:59.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:59.099 00:11:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:59.099 00:11:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.099 [2024-07-25 00:11:04.723060] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:30:59.099 [2024-07-25 00:11:04.723165] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:59.099 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.099 [2024-07-25 00:11:04.784698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.357 [2024-07-25 00:11:04.870745] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:59.357 [2024-07-25 00:11:04.870795] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:59.357 [2024-07-25 00:11:04.870823] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.357 [2024-07-25 00:11:04.870835] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:30:59.357 [2024-07-25 00:11:04.870845] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:59.357 [2024-07-25 00:11:04.870870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.357 00:11:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:59.357 00:11:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:30:59.357 00:11:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:59.357 00:11:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:59.357 00:11:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.357 [2024-07-25 00:11:05.007140] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.357 [2024-07-25 00:11:05.015297] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:59.357 00:11:05 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.357 null0 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.357 null1 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:59.357 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.358 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.358 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.358 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=907392 00:30:59.358 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 907392 /tmp/host.sock 00:30:59.358 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:59.358 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 907392 ']' 00:30:59.358 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:30:59.358 
00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:59.358 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:59.358 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:59.358 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:59.358 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.616 [2024-07-25 00:11:05.092794] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:30:59.616 [2024-07-25 00:11:05.092884] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907392 ] 00:30:59.616 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.616 [2024-07-25 00:11:05.153822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.616 [2024-07-25 00:11:05.246829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.873 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:59.873 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:30:59.873 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:59.873 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:59.873 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.873 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.873 00:11:05 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.873 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:59.873 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.873 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.874 00:11:05 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:59.874 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == 
'' ]] 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.132 [2024-07-25 00:11:05.657054] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@912 -- # (( max-- )) 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:31:00.132 00:11:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:01.064 [2024-07-25 00:11:06.435313] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:01.065 [2024-07-25 00:11:06.435355] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:01.065 [2024-07-25 00:11:06.435394] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:01.065 [2024-07-25 00:11:06.521661] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new 
subsystem nvme0 00:31:01.065 [2024-07-25 00:11:06.746889] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:01.065 [2024-07-25 00:11:06.746922] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:01.322 
00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # 
get_subsystem_paths nvme0 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 
00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.322 00:11:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' 
'"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:01.322 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.580 [2024-07-25 00:11:07.121509] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:01.580 [2024-07-25 00:11:07.121928] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:01.580 [2024-07-25 00:11:07.121968] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 
00:31:01.580 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.581 [2024-07-25 00:11:07.210232] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- 
# (( max-- )) 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:01.581 00:11:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:01.838 [2024-07-25 00:11:07.510777] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:01.838 [2024-07-25 00:11:07.510807] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:01.838 [2024-07-25 00:11:07.510817] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == 
'"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' 
== 'expected_count))' 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.772 [2024-07-25 00:11:08.353416] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:02.772 [2024-07-25 00:11:08.353447] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:02.772 00:11:08 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:02.772 [2024-07-25 00:11:08.358001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.772 [2024-07-25 00:11:08.358037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.772 [2024-07-25 00:11:08.358055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.772 [2024-07-25 00:11:08.358070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.772 [2024-07-25 00:11:08.358095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.772 [2024-07-25 00:11:08.358119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.772 [2024-07-25 00:11:08.358136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.772 [2024-07-25 00:11:08.358166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.772 [2024-07-25 00:11:08.358179] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce8da0 is same 
with the state(5) to be set 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:02.772 [2024-07-25 00:11:08.368004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce8da0 (9): Bad file descriptor 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.772 [2024-07-25 00:11:08.378046] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:02.772 [2024-07-25 00:11:08.378314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:02.772 [2024-07-25 00:11:08.378344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce8da0 with addr=10.0.0.2, port=4420 00:31:02.772 [2024-07-25 00:11:08.378360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce8da0 is same with the state(5) to be set 00:31:02.772 [2024-07-25 00:11:08.378383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce8da0 (9): Bad file descriptor 00:31:02.772 [2024-07-25 00:11:08.378404] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:02.772 [2024-07-25 00:11:08.378424] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:02.772 [2024-07-25 00:11:08.378439] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:02.772 [2024-07-25 00:11:08.378460] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:02.772 [2024-07-25 00:11:08.388152] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:02.772 [2024-07-25 00:11:08.388378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:02.772 [2024-07-25 00:11:08.388414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce8da0 with addr=10.0.0.2, port=4420 00:31:02.772 [2024-07-25 00:11:08.388430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce8da0 is same with the state(5) to be set 00:31:02.772 [2024-07-25 00:11:08.388452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce8da0 (9): Bad file descriptor 00:31:02.772 [2024-07-25 00:11:08.388472] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:02.772 [2024-07-25 00:11:08.388486] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:02.772 [2024-07-25 00:11:08.388498] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:02.772 [2024-07-25 00:11:08.388517] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:02.772 [2024-07-25 00:11:08.398239] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:02.772 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:02.772 [2024-07-25 00:11:08.398452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:02.772 [2024-07-25 00:11:08.398481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce8da0 with addr=10.0.0.2, port=4420 00:31:02.772 [2024-07-25 00:11:08.398498] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce8da0 is same with the state(5) to be set 00:31:02.772 [2024-07-25 00:11:08.398526] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce8da0 (9): Bad file descriptor 00:31:02.772 [2024-07-25 00:11:08.398547] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:02.772 [2024-07-25 00:11:08.398561] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:02.773 [2024-07-25 00:11:08.398575] 
nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:02.773 [2024-07-25 00:11:08.398594] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:02.773 [2024-07-25 00:11:08.408314] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:02.773 [2024-07-25 00:11:08.408626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:02.773 [2024-07-25 00:11:08.408656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce8da0 with addr=10.0.0.2, port=4420 00:31:02.773 [2024-07-25 00:11:08.408672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce8da0 is same with the state(5) to be set 00:31:02.773 [2024-07-25 00:11:08.408694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce8da0 (9): Bad file descriptor 00:31:02.773 [2024-07-25 00:11:08.408715] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:02.773 [2024-07-25 00:11:08.408729] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:02.773 
[2024-07-25 00:11:08.408742] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:02.773 [2024-07-25 00:11:08.408761] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:02.773 [2024-07-25 00:11:08.418404] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:02.773 [2024-07-25 00:11:08.418629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:02.773 [2024-07-25 00:11:08.418659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce8da0 with addr=10.0.0.2, port=4420 00:31:02.773 [2024-07-25 00:11:08.418676] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce8da0 is same with the state(5) to be set 00:31:02.773 [2024-07-25 00:11:08.418699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce8da0 (9): Bad file descriptor 00:31:02.773 [2024-07-25 00:11:08.418721] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:02.773 [2024-07-25 00:11:08.418735] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:02.773 [2024-07-25 00:11:08.418749] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:02.773 [2024-07-25 00:11:08.418770] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:02.773 [2024-07-25 00:11:08.428481] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:02.773 [2024-07-25 00:11:08.428701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:02.773 [2024-07-25 00:11:08.428732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce8da0 with addr=10.0.0.2, port=4420 00:31:02.773 [2024-07-25 00:11:08.428755] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce8da0 is same with the state(5) to be set 00:31:02.773 [2024-07-25 00:11:08.428780] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce8da0 (9): Bad file descriptor 00:31:02.773 [2024-07-25 00:11:08.428802] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:02.773 [2024-07-25 00:11:08.428817] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:02.773 [2024-07-25 00:11:08.428831] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:02.773 [2024-07-25 00:11:08.428851] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:02.773 [2024-07-25 00:11:08.438558] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:02.773 [2024-07-25 00:11:08.440734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.773 [2024-07-25 00:11:08.440771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce8da0 with addr=10.0.0.2, port=4420 00:31:02.773 [2024-07-25 00:11:08.440791] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce8da0 is same with the state(5) to be set 00:31:02.773 
00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:02.773 [2024-07-25 00:11:08.440817] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce8da0 (9): Bad file descriptor 00:31:02.773 [2024-07-25 00:11:08.440875] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.773 [2024-07-25 00:11:08.440898] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:02.773 [2024-07-25 00:11:08.440914] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:02.773 [2024-07-25 00:11:08.440937] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:02.773 [2024-07-25 00:11:08.448637] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:02.773 [2024-07-25 00:11:08.448866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:02.773 [2024-07-25 00:11:08.448899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce8da0 with addr=10.0.0.2, port=4420 00:31:02.773 [2024-07-25 00:11:08.448917] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce8da0 is same with the state(5) to be set 00:31:02.773 [2024-07-25 00:11:08.448947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce8da0 (9): Bad file descriptor 00:31:02.773 [2024-07-25 00:11:08.448987] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in 
error state 00:31:02.773 [2024-07-25 00:11:08.449007] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:02.773 [2024-07-25 00:11:08.449022] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:02.773 [2024-07-25 00:11:08.449043] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.773 [2024-07-25 00:11:08.458718] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:02.773 [2024-07-25 00:11:08.458898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:02.773 [2024-07-25 00:11:08.458930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce8da0 with addr=10.0.0.2, port=4420 00:31:02.773 [2024-07-25 00:11:08.458948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce8da0 is same with the state(5) to be set 00:31:02.773 [2024-07-25 00:11:08.458972] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce8da0 (9): Bad file descriptor 00:31:02.773 [2024-07-25 00:11:08.458995] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:02.773 [2024-07-25 00:11:08.459010] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:02.773 [2024-07-25 00:11:08.459025] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:02.773 [2024-07-25 00:11:08.459060] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:02.773 [2024-07-25 00:11:08.468795] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:02.773 [2024-07-25 00:11:08.468971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:02.773 [2024-07-25 00:11:08.469001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce8da0 with addr=10.0.0.2, port=4420 00:31:02.773 [2024-07-25 00:11:08.469019] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce8da0 is same with the state(5) to be set 00:31:02.773 [2024-07-25 00:11:08.469043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce8da0 (9): Bad file descriptor 00:31:02.773 [2024-07-25 00:11:08.469081] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:02.773 [2024-07-25 00:11:08.469111] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:02.773 [2024-07-25 00:11:08.469129] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:02.773 [2024-07-25 00:11:08.469167] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:02.773 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:31:02.774 00:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:02.774 [2024-07-25 00:11:08.478873] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:02.774 [2024-07-25 00:11:08.479093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:02.774 [2024-07-25 00:11:08.479152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce8da0 with addr=10.0.0.2, port=4420 00:31:02.774 [2024-07-25 00:11:08.479169] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce8da0 is same with the state(5) to be set 00:31:02.774 [2024-07-25 00:11:08.479191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce8da0 (9): Bad file descriptor 00:31:02.774 [2024-07-25 00:11:08.479230] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:02.774 [2024-07-25 00:11:08.479248] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:02.774 [2024-07-25 00:11:08.479261] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:02.774 [2024-07-25 00:11:08.479281] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:02.774 [2024-07-25 00:11:08.482229] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:02.774 [2024-07-25 00:11:08.482260] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 
'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # 
local max=10 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.148 00:11:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.082 [2024-07-25 00:11:10.716506] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:05.082 [2024-07-25 00:11:10.716544] bdev_nvme.c:7064:discovery_poller: *INFO*: 
Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:05.082 [2024-07-25 00:11:10.716567] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:05.340 [2024-07-25 00:11:10.803827] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:05.598 [2024-07-25 00:11:11.071765] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:05.598 [2024-07-25 00:11:11.071822] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.599 request: 00:31:05.599 { 00:31:05.599 "name": "nvme", 00:31:05.599 "trtype": "tcp", 00:31:05.599 "traddr": "10.0.0.2", 00:31:05.599 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:05.599 "adrfam": "ipv4", 00:31:05.599 "trsvcid": "8009", 00:31:05.599 "wait_for_attach": true, 00:31:05.599 "method": "bdev_nvme_start_discovery", 00:31:05.599 "req_id": 1 00:31:05.599 } 00:31:05.599 Got JSON-RPC error response 00:31:05.599 response: 00:31:05.599 { 00:31:05.599 "code": -17, 00:31:05.599 "message": "File exists" 00:31:05.599 } 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:05.599 00:11:11 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.599 request: 00:31:05.599 { 00:31:05.599 "name": "nvme_second", 00:31:05.599 "trtype": "tcp", 00:31:05.599 "traddr": "10.0.0.2", 00:31:05.599 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:05.599 "adrfam": "ipv4", 00:31:05.599 "trsvcid": "8009", 00:31:05.599 "wait_for_attach": true, 00:31:05.599 "method": "bdev_nvme_start_discovery", 00:31:05.599 "req_id": 1 00:31:05.599 } 00:31:05.599 Got JSON-RPC error response 00:31:05.599 response: 00:31:05.599 { 00:31:05.599 "code": -17, 00:31:05.599 "message": "File exists" 00:31:05.599 } 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 
-- # jq -r '.[].name' 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:05.599 
00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.599 00:11:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.973 [2024-07-25 00:11:12.288462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.973 [2024-07-25 00:11:12.288521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1a5a0 with addr=10.0.0.2, port=8010 00:31:06.973 [2024-07-25 00:11:12.288558] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:06.973 [2024-07-25 00:11:12.288575] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:06.973 [2024-07-25 00:11:12.288590] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:07.906 [2024-07-25 00:11:13.290826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.906 [2024-07-25 00:11:13.290890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d262e0 with addr=10.0.0.2, port=8010 00:31:07.906 [2024-07-25 00:11:13.290925] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:07.906 [2024-07-25 00:11:13.290941] nvme.c: 821:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:31:07.906 [2024-07-25 00:11:13.290955] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:08.838 [2024-07-25 00:11:14.292977] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:08.838 request: 00:31:08.838 { 00:31:08.838 "name": "nvme_second", 00:31:08.838 "trtype": "tcp", 00:31:08.838 "traddr": "10.0.0.2", 00:31:08.838 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:08.838 "adrfam": "ipv4", 00:31:08.838 "trsvcid": "8010", 00:31:08.838 "attach_timeout_ms": 3000, 00:31:08.838 "method": "bdev_nvme_start_discovery", 00:31:08.838 "req_id": 1 00:31:08.838 } 00:31:08.838 Got JSON-RPC error response 00:31:08.838 response: 00:31:08.838 { 00:31:08.838 "code": -110, 00:31:08.838 "message": "Connection timed out" 00:31:08.838 } 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- 
# sort 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 907392 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:08.838 rmmod nvme_tcp 00:31:08.838 rmmod nvme_fabrics 00:31:08.838 rmmod nvme_keyring 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 907295 ']' 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 907295 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 907295 ']' 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 907295 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@951 -- # uname 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 907295 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 907295' 00:31:08.838 killing process with pid 907295 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 907295 00:31:08.838 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 907295 00:31:09.096 00:11:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:09.096 00:11:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:09.096 00:11:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:09.096 00:11:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:09.096 00:11:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:09.096 00:11:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.096 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:09.096 00:11:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.627 00:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:11.627 00:31:11.627 real 0m14.121s 00:31:11.627 user 0m21.140s 00:31:11.627 sys 0m2.767s 00:31:11.627 00:11:16 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:31:11.627 00:11:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.627 ************************************ 00:31:11.627 END TEST nvmf_host_discovery 00:31:11.627 ************************************ 00:31:11.627 00:11:16 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:11.627 00:11:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:11.627 00:11:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:11.627 00:11:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:11.627 ************************************ 00:31:11.627 START TEST nvmf_host_multipath_status 00:31:11.627 ************************************ 00:31:11.627 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:11.627 * Looking for test storage... 
00:31:11.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.628 00:11:16 
nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:11.628 00:11:16 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:11.628 00:11:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:13.002 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:13.002 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:13.002 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:13.002 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:13.002 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:13.002 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:13.002 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:13.002 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:13.002 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:13.002 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:13.002 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:13.002 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:13.002 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:13.002 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:13.002 00:11:18 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:13.002 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:13.003 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:13.003 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:13.003 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:13.261 Found net devices under 0000:09:00.0: cvl_0_0 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:13.261 Found net devices under 0000:09:00.1: cvl_0_1 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:13.261 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 
addr flush cvl_0_0 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:13.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:13.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:31:13.262 00:31:13.262 --- 10.0.0.2 ping statistics --- 00:31:13.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.262 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:13.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:13.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms
00:31:13.262
00:31:13.262 --- 10.0.0.1 ping statistics ---
00:31:13.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:13.262 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms
00:31:13.262
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=910603
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 910603
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 910603 ']'
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:13.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable
00:31:13.262 00:11:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:13.262 [2024-07-25 00:11:18.941579] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:31:13.262 [2024-07-25 00:11:18.941652] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:13.262 EAL: No free 2048 kB hugepages reported on node 1
00:31:13.520 [2024-07-25 00:11:19.003895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:31:13.520 [2024-07-25 00:11:19.096130] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:13.520 [2024-07-25 00:11:19.096183] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:13.520 [2024-07-25 00:11:19.096211] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:13.520 [2024-07-25 00:11:19.096223] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:13.520 [2024-07-25 00:11:19.096233] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:13.520 [2024-07-25 00:11:19.096296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:13.520 [2024-07-25 00:11:19.096301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:13.520 00:11:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:31:13.520 00:11:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0
00:31:13.520 00:11:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:31:13.520 00:11:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:13.520 00:11:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:13.778 00:11:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:13.778 00:11:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=910603
00:31:13.778 00:11:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:31:14.036 [2024-07-25 00:11:19.512035] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:14.036 00:11:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:31:14.295 Malloc0
00:31:14.295 00:11:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:31:14.553 00:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:14.811 00:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:15.068 [2024-07-25 00:11:20.573211] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:15.068 00:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:31:15.326 [2024-07-25 00:11:20.858018] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:31:15.326 00:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=910784
00:31:15.326 00:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:31:15.326 00:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:31:15.327 00:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 910784 /var/tmp/bdevperf.sock
00:31:15.327 00:11:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 910784 ']'
00:31:15.327 00:11:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:15.327 00:11:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100
00:31:15.327 00:11:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:15.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:31:15.327 00:11:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable
00:31:15.327 00:11:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:15.585 00:11:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:31:15.585 00:11:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0
00:31:15.585 00:11:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:31:15.842 00:11:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
00:31:16.432 Nvme0n1
00:31:16.432 00:11:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:31:16.691 Nvme0n1
00:31:16.691 00:11:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:31:16.691 00:11:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:31:19.218 00:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:31:19.218 00:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:31:19.218 00:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:31:19.218 00:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:31:20.592 00:11:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:31:20.592 00:11:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:31:20.592 00:11:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:20.592 00:11:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:20.592 00:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:20.592 00:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:31:20.592 00:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:20.592 00:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:20.850 00:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:20.850 00:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:20.850 00:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:20.850 00:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:21.107 00:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:21.108 00:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:21.108 00:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:21.108 00:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:21.366 00:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:21.366 00:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:21.366 00:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:21.366 00:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:21.624 00:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:21.624 00:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:21.624 00:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:21.624 00:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:21.881 00:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:21.881 00:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:31:21.881 00:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:31:22.139 00:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:31:22.397 00:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:31:23.331 00:11:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:31:23.331 00:11:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:31:23.331 00:11:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:23.331 00:11:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:23.589 00:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:23.589 00:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:31:23.589 00:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:23.589 00:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:23.847 00:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:23.847 00:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:23.847 00:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:23.847 00:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:24.105 00:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:24.105 00:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:24.105 00:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:24.105 00:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:24.363 00:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:24.363 00:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:24.363 00:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:24.363 00:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:24.620 00:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:24.620 00:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:24.620 00:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:24.620 00:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:24.878 00:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:24.878 00:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:31:24.878 00:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:31:25.136 00:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:31:25.394 00:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:31:26.326 00:11:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:31:26.326 00:11:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:31:26.326 00:11:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:26.326 00:11:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:26.583 00:11:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:26.583 00:11:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:31:26.583 00:11:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:26.583 00:11:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:26.840 00:11:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:26.840 00:11:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:26.840 00:11:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:26.840 00:11:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:27.097 00:11:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:27.097 00:11:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:27.097 00:11:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:27.097 00:11:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:27.355 00:11:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:27.355 00:11:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:27.355 00:11:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:27.355 00:11:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:27.613 00:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:27.613 00:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:27.613 00:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:27.613 00:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:27.870 00:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:27.870 00:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:31:27.870 00:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:31:28.127 00:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:31:28.384 00:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:31:29.315 00:11:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:31:29.315 00:11:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:31:29.315 00:11:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:29.315 00:11:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:29.572 00:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:29.572 00:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:31:29.572 00:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:29.572 00:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:29.830 00:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:29.830 00:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:29.830 00:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:29.830 00:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:30.087 00:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:30.087 00:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:30.087 00:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:30.087 00:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:30.344 00:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:30.344 00:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:30.345 00:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:30.345 00:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:30.604 00:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:30.604 00:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:31:30.605 00:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:30.605 00:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:30.863 00:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:30.863 00:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:31:30.863 00:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:31:31.119 00:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:31:31.376 00:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:31:32.307 00:11:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:31:32.307 00:11:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:31:32.307 00:11:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:32.307 00:11:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:32.564 00:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:32.564 00:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:31:32.564 00:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:32.564 00:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:32.821 00:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:32.821 00:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:32.821 00:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:32.821 00:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:33.079 00:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:33.079 00:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:33.079 00:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:33.079 00:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:33.337 00:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:33.337 00:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:31:33.337 00:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:33.337 00:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:33.594 00:11:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:33.595 00:11:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:31:33.595 00:11:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:33.595 00:11:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:33.852 00:11:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:33.852 00:11:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:31:33.852 00:11:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:31:34.111 00:11:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:31:34.403 00:11:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:31:35.337 00:11:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:31:35.337 00:11:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:31:35.337 00:11:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:35.337 00:11:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:35.595 00:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:35.595 00:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:31:35.595 00:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:35.595 00:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:35.853 00:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:35.853 00:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:35.853 00:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:35.853 00:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:36.111 00:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:36.111 00:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:36.111 00:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:36.111 00:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:36.369 00:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:36.369 00:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:31:36.369 00:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:36.369 00:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:36.627 00:11:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:36.627 00:11:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:36.627 00:11:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:36.627 00:11:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:36.887 00:11:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:36.887 00:11:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:31:37.146 00:11:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:31:37.146 00:11:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:31:37.404 00:11:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:31:37.663 00:11:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:31:38.594 00:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:31:38.594 00:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:31:38.594 00:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:38.594 00:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:38.852 00:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:38.852 00:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:31:38.852 00:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:38.852 00:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:39.110 00:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:39.110 00:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:39.110 00:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:39.110 00:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:39.368 00:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:39.368 00:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:39.368 00:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:39.368 00:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:39.626 00:11:45
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:39.626 00:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:39.626 00:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.626 00:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:39.884 00:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:39.884 00:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:39.884 00:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.884 00:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:40.142 00:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.142 00:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:40.142 00:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:40.400 00:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4421 -n optimized 00:31:40.658 00:11:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:41.590 00:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:41.591 00:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:41.591 00:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.591 00:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:41.849 00:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:41.849 00:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:41.849 00:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.849 00:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:42.107 00:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.107 00:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:42.107 00:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.107 00:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:31:42.365 00:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.365 00:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:42.365 00:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.365 00:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:42.623 00:11:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.623 00:11:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:42.623 00:11:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.623 00:11:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:42.881 00:11:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.881 00:11:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:42.881 00:11:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.881 00:11:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:43.140 00:11:48 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.140 00:11:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:43.140 00:11:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:43.398 00:11:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:43.656 00:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:44.590 00:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:44.590 00:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:44.590 00:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.591 00:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:44.848 00:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.848 00:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:44.848 00:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.848 00:11:50 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:45.105 00:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.105 00:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:45.106 00:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.106 00:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:45.363 00:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.363 00:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:45.363 00:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.363 00:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:45.620 00:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.620 00:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:45.620 00:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.620 00:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 
00:31:45.877 00:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.877 00:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:45.877 00:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.877 00:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:46.134 00:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.134 00:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:46.134 00:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:46.392 00:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:46.650 00:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:47.582 00:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:47.582 00:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:47.582 00:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:31:47.582 00:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:47.839 00:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.839 00:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:47.839 00:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.839 00:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:48.094 00:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:48.095 00:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:48.095 00:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:48.095 00:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.351 00:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.351 00:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:48.351 00:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.351 00:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:48.608 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.608 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:48.608 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.608 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:48.866 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.866 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:48.866 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.866 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:49.123 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:49.123 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 910784 00:31:49.123 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 910784 ']' 00:31:49.123 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 910784 00:31:49.123 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:49.123 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:31:49.123 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 910784 00:31:49.123 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:31:49.123 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:31:49.123 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 910784' 00:31:49.123 killing process with pid 910784 00:31:49.123 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 910784 00:31:49.123 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 910784 00:31:49.384 Connection closed with partial response: 00:31:49.384 00:31:49.384 00:31:49.384 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 910784 00:31:49.384 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:49.384 [2024-07-25 00:11:20.917295] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:31:49.384 [2024-07-25 00:11:20.917382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910784 ] 00:31:49.384 EAL: No free 2048 kB hugepages reported on node 1 00:31:49.384 [2024-07-25 00:11:20.980155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:49.384 [2024-07-25 00:11:21.071279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:49.384 Running I/O for 90 seconds... 
00:31:49.384 [2024-07-25 00:11:36.647757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.647822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.647888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.647910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.647935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.647953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.647976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.647993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.648015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:67792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.648032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.648055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 
[2024-07-25 00:11:36.648071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.648094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.648120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.648144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.648161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.648202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.648220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.648242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.648258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.648281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:67840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.648308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 
00:11:36.648331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.648348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.648370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.648386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.648408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.648424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.648446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.648462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.648484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.648500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.648521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.648537] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.648559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.648575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.648597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.648613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.648634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.384 [2024-07-25 00:11:36.648651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:49.384 [2024-07-25 00:11:36.648672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.648688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.648710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.385 [2024-07-25 00:11:36.648726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.648748] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.385 [2024-07-25 00:11:36.648764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.648791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.385 [2024-07-25 00:11:36.648807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.648829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.385 [2024-07-25 00:11:36.648845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.648867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.385 [2024-07-25 00:11:36.648883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.648905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.385 [2024-07-25 00:11:36.648921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.648946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.385 [2024-07-25 00:11:36.648962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.385 [2024-07-25 00:11:36.649254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.649302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.649344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.649392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.649433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.649474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.649514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.649561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.649602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.649642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.649682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.649722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.649763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.649803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.649844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.649884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.649925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.649965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.649989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.650005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.650029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.650050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.650075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.650091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.650127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.650146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.650171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.650187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.650211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.650227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.650251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.650267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.650291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.650307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.650332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.650348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:49.385 [2024-07-25 00:11:36.650372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.385 [2024-07-25 00:11:36.650388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.650412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.650428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.650452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.650468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.650492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.650508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.650532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.650553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.650578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.650594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.650618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.650634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.650659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.650675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.650700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.650716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.650740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.650756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.650780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.650796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.650820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.650836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.650860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.650877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.650901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.650917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.650941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.650956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.650980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.650996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.386 [2024-07-25 00:11:36.651962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.651990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.386 [2024-07-25 00:11:36.652007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:49.386 [2024-07-25 00:11:36.652037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.652054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.652099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.652157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.652204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.652249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.652299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.652346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.652391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652420] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.652437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.652482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.652527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.652573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.652618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.652662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.387 [2024-07-25 00:11:36.652708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.387 [2024-07-25 00:11:36.652753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.387 [2024-07-25 00:11:36.652799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.387 [2024-07-25 00:11:36.652851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.387 [2024-07-25 00:11:36.652898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.387 [2024-07-25 00:11:36.652944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.652973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.387 [2024-07-25 00:11:36.652989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.653018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.653035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.653064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.653080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.653117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.653136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.653165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.653182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.653211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.653227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.653256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.653272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.653301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.653317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.653345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.653362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.653391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:67696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.653407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.653440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.653457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.653485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.653502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.653531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.653548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.653576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.653593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.653622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.387 [2024-07-25 00:11:36.653639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:49.387 [2024-07-25 00:11:36.653667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.388 [2024-07-25 00:11:36.653684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:36.653713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.388 [2024-07-25 00:11:36.653730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.153711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.153774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.153833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.153854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.153879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.153897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.153919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.153936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.153958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.153974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.154006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.154023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.154046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.388 [2024-07-25 00:11:52.154079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.154113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.388 [2024-07-25 00:11:52.154131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.154154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.388 [2024-07-25 00:11:52.154169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.154190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.388 [2024-07-25 00:11:52.154206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.154228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.388 [2024-07-25 00:11:52.154244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.156173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.388 [2024-07-25 00:11:52.156199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.156228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.156246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.156269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.156285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.156307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.156323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.156345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.156361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.156383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.156399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.156420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.156442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.156465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.156481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.156503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.156518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.156540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.156556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.156577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.156593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.156614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.156630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.156652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.156667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.156689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.156705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.156726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.156742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.156764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.156780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.157218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.157242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.157269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.157287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.157310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.157331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.157354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.157371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.157393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.157409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:49.388 [2024-07-25 00:11:52.157430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.388 [2024-07-25 00:11:52.157446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.157467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:49.389 [2024-07-25 00:11:52.157488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.157510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.389 [2024-07-25 00:11:52.157540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.157562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.389 [2024-07-25 00:11:52.157578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.157599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.389 [2024-07-25 00:11:52.157614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.157635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.389 [2024-07-25 00:11:52.157651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.157672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.389 [2024-07-25 00:11:52.157687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.157708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.389 [2024-07-25 00:11:52.157723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.157744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.389 [2024-07-25 00:11:52.157759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.157780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.389 [2024-07-25 00:11:52.157799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.157821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.389 [2024-07-25 00:11:52.157836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.157857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.389 [2024-07-25 00:11:52.157873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.157894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.389 [2024-07-25 00:11:52.157909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.157930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.389 [2024-07-25 00:11:52.157945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.157966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.389 [2024-07-25 00:11:52.157982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.158003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.389 [2024-07-25 00:11:52.158018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.158038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.389 [2024-07-25 00:11:52.158053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.158074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.389 [2024-07-25 00:11:52.158090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:49.389 [2024-07-25 00:11:52.158135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.389 [2024-07-25 00:11:52.158165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:49.389 Received shutdown signal, test time was about 32.239054 seconds
00:31:49.389
00:31:49.389 Latency(us)
00:31:49.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:49.389 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:49.389 Verification LBA range: start 0x0 length 0x4000
00:31:49.389 Nvme0n1 : 32.24 7961.86 31.10 0.00 0.00 16050.40 1025.52 4026531.84
00:31:49.389 ===================================================================================================================
00:31:49.389 Total : 7961.86 31.10 0.00 0.00 16050.40 1025.52 4026531.84
00:31:49.389 00:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:49.647 rmmod nvme_tcp 00:31:49.647 rmmod nvme_fabrics 00:31:49.647 rmmod nvme_keyring 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 910603 ']' 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 910603 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 910603 ']' 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 910603 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 910603 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 910603' 00:31:49.647 killing process with pid 910603 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 910603 00:31:49.647 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 910603 00:31:49.904 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:49.904 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:49.904 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:49.904 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:49.904 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:49.904 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.904 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:49.904 00:11:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.461 00:11:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:52.461 00:31:52.461 real 0m40.817s 00:31:52.461 user 2m3.368s 00:31:52.461 sys 0m10.256s 00:31:52.462 00:11:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:52.462 00:11:57 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@10 -- # set +x 00:31:52.462 ************************************ 00:31:52.462 END TEST nvmf_host_multipath_status 00:31:52.462 ************************************ 00:31:52.462 00:11:57 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:52.462 00:11:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:52.462 00:11:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:52.462 00:11:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:52.462 ************************************ 00:31:52.462 START TEST nvmf_discovery_remove_ifc 00:31:52.462 ************************************ 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:52.462 * Looking for test storage... 
00:31:52.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:52.462 00:11:57 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.462 00:11:57 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:52.462 00:11:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:54.361 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:54.361 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:54.361 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:54.362 Found net devices under 0000:09:00.0: cvl_0_0 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:54.362 Found net devices under 0000:09:00.1: cvl_0_1 
00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:54.362 
00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:54.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:54.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:31:54.362 00:31:54.362 --- 10.0.0.2 ping statistics --- 00:31:54.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.362 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:54.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:54.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:31:54.362 00:31:54.362 --- 10.0.0.1 ping statistics --- 00:31:54.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.362 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=916973 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:54.362 
00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 916973 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 916973 ']' 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:54.362 00:11:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:54.362 [2024-07-25 00:11:59.847659] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:31:54.362 [2024-07-25 00:11:59.847743] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.362 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.362 [2024-07-25 00:11:59.909223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.362 [2024-07-25 00:11:59.996829] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.362 [2024-07-25 00:11:59.996889] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:54.362 [2024-07-25 00:11:59.996903] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:54.362 [2024-07-25 00:11:59.996914] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:54.362 [2024-07-25 00:11:59.996923] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:54.362 [2024-07-25 00:11:59.996962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.620 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:54.620 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:31:54.620 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:54.620 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:54.620 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:54.620 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.620 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:54.620 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.621 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:54.621 [2024-07-25 00:12:00.148530] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:54.621 [2024-07-25 00:12:00.156690] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:54.621 null0 00:31:54.621 [2024-07-25 00:12:00.188631] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.621 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.621 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=917016 00:31:54.621 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 917016 /tmp/host.sock 00:31:54.621 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 917016 ']' 00:31:54.621 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:54.621 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:54.621 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:54.621 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:54.621 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:54.621 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:54.621 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:54.621 [2024-07-25 00:12:00.257010] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:31:54.621 [2024-07-25 00:12:00.257114] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid917016 ] 00:31:54.621 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.621 [2024-07-25 00:12:00.317141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.879 [2024-07-25 00:12:00.408603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.879 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:54.879 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:31:54.879 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:54.879 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:54.879 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.879 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:54.879 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.879 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:54.879 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.879 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:54.879 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.879 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:54.879 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.879 00:12:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.251 [2024-07-25 00:12:01.637111] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:56.251 [2024-07-25 00:12:01.637139] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:56.251 [2024-07-25 00:12:01.637162] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:56.251 [2024-07-25 00:12:01.764593] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:56.510 [2024-07-25 00:12:01.988700] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:56.510 [2024-07-25 00:12:01.988770] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:56.510 [2024-07-25 00:12:01.988815] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:56.510 [2024-07-25 00:12:01.988841] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:56.510 [2024-07-25 00:12:01.988872] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:56.510 00:12:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.510 00:12:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:56.510 00:12:01 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:56.510 00:12:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.510 00:12:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.510 00:12:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:56.510 00:12:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.510 00:12:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:56.510 00:12:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:56.510 [2024-07-25 00:12:01.995233] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x786900 was disconnected and freed. delete nvme_qpair. 00:31:56.510 00:12:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.510 00:12:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:56.510 00:12:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:56.510 00:12:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:56.510 00:12:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:56.510 00:12:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:56.510 00:12:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.510 00:12:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.510 00:12:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 
-- # jq -r '.[].name' 00:31:56.510 00:12:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.510 00:12:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:56.510 00:12:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:56.510 00:12:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.510 00:12:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:56.510 00:12:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:57.443 00:12:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:57.443 00:12:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:57.444 00:12:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:57.444 00:12:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.444 00:12:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:57.444 00:12:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:57.444 00:12:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:57.444 00:12:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.701 00:12:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:57.701 00:12:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:58.631 00:12:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:58.631 00:12:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:31:58.631 00:12:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:58.631 00:12:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.631 00:12:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.631 00:12:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:58.631 00:12:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:58.631 00:12:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.631 00:12:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:58.631 00:12:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:59.568 00:12:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:59.568 00:12:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.568 00:12:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:59.568 00:12:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.568 00:12:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:59.568 00:12:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.568 00:12:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:59.568 00:12:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.568 00:12:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:59.568 00:12:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 
00:32:00.940 00:12:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:00.940 00:12:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:00.940 00:12:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:00.940 00:12:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.940 00:12:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:00.940 00:12:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:00.940 00:12:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:00.940 00:12:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.940 00:12:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:00.940 00:12:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:01.874 00:12:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:01.874 00:12:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.874 00:12:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:01.874 00:12:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.874 00:12:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.874 00:12:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:01.874 00:12:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:01.874 00:12:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:01.874 00:12:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:01.874 00:12:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:01.874 [2024-07-25 00:12:07.429957] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:01.874 [2024-07-25 00:12:07.430029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:01.874 [2024-07-25 00:12:07.430055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:01.874 [2024-07-25 00:12:07.430074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:01.874 [2024-07-25 00:12:07.430089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:01.874 [2024-07-25 00:12:07.430112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:01.874 [2024-07-25 00:12:07.430130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:01.874 [2024-07-25 00:12:07.430169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:01.874 [2024-07-25 00:12:07.430182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:01.874 [2024-07-25 00:12:07.430196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:32:01.874 [2024-07-25 00:12:07.430208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:01.874 [2024-07-25 00:12:07.430220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74d990 is same with the state(5) to be set 00:32:01.874 [2024-07-25 00:12:07.439975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74d990 (9): Bad file descriptor 00:32:01.874 [2024-07-25 00:12:07.450024] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:02.808 00:12:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:02.808 00:12:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:02.808 00:12:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.808 00:12:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:02.808 00:12:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.808 00:12:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:02.808 00:12:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:02.808 [2024-07-25 00:12:08.466152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:02.808 [2024-07-25 00:12:08.466216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74d990 with addr=10.0.0.2, port=4420 00:32:02.808 [2024-07-25 00:12:08.466244] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74d990 is same with the state(5) to be set 00:32:02.808 [2024-07-25 00:12:08.466295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x74d990 (9): Bad file descriptor 00:32:02.808 [2024-07-25 00:12:08.466790] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:02.808 [2024-07-25 00:12:08.466825] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:02.808 [2024-07-25 00:12:08.466842] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:02.808 [2024-07-25 00:12:08.466860] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:02.808 [2024-07-25 00:12:08.466892] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:02.808 [2024-07-25 00:12:08.466911] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:02.808 00:12:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.808 00:12:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:02.808 00:12:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:04.181 [2024-07-25 00:12:09.469409] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:04.181 [2024-07-25 00:12:09.469452] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:04.181 [2024-07-25 00:12:09.469468] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:04.181 [2024-07-25 00:12:09.469484] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:04.181 [2024-07-25 00:12:09.469508] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.181 [2024-07-25 00:12:09.469553] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:04.181 [2024-07-25 00:12:09.469596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.181 [2024-07-25 00:12:09.469620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.181 [2024-07-25 00:12:09.469641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.182 [2024-07-25 00:12:09.469656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.182 [2024-07-25 00:12:09.469671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.182 [2024-07-25 00:12:09.469693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.182 [2024-07-25 00:12:09.469709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.182 [2024-07-25 00:12:09.469723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.182 [2024-07-25 00:12:09.469738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.182 [2024-07-25 00:12:09.469752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.182 [2024-07-25 00:12:09.469767] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:32:04.182 [2024-07-25 00:12:09.469985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74cde0 (9): Bad file descriptor 00:32:04.182 [2024-07-25 00:12:09.471007] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:04.182 [2024-07-25 00:12:09.471032] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:04.182 00:12:09 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:04.182 00:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:05.115 00:12:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:05.115 00:12:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:05.115 00:12:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:05.115 00:12:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.115 00:12:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.115 00:12:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:05.115 00:12:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:05.115 00:12:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.115 
00:12:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:05.115 00:12:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:06.046 [2024-07-25 00:12:11.522970] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:06.046 [2024-07-25 00:12:11.523015] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:06.046 [2024-07-25 00:12:11.523041] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:06.046 [2024-07-25 00:12:11.650477] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:06.046 00:12:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:06.046 00:12:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:06.046 00:12:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:06.046 00:12:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.046 00:12:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:06.046 00:12:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:06.046 00:12:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:06.046 00:12:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.046 00:12:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:06.046 00:12:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:06.305 [2024-07-25 00:12:11.874023] 
bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:06.305 [2024-07-25 00:12:11.874079] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:06.305 [2024-07-25 00:12:11.874125] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:06.305 [2024-07-25 00:12:11.874166] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:06.305 [2024-07-25 00:12:11.874179] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:06.305 [2024-07-25 00:12:11.881498] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x767f60 was disconnected and freed. delete nvme_qpair. 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@90 -- # killprocess 917016 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 917016 ']' 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 917016 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 917016 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 917016' 00:32:07.239 killing process with pid 917016 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 917016 00:32:07.239 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 917016 00:32:07.497 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:07.497 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:07.497 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:07.497 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:07.497 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:07.497 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:07.497 00:12:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:07.497 rmmod nvme_tcp 00:32:07.497 rmmod 
nvme_fabrics 00:32:07.497 rmmod nvme_keyring 00:32:07.497 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:07.497 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:07.497 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:32:07.497 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 916973 ']' 00:32:07.497 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 916973 00:32:07.497 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 916973 ']' 00:32:07.497 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 916973 00:32:07.497 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:07.497 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:07.497 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 916973 00:32:07.497 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:07.497 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:07.497 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 916973' 00:32:07.497 killing process with pid 916973 00:32:07.497 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 916973 00:32:07.497 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 916973 00:32:07.755 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:07.755 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:07.755 00:12:13 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:07.755 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:07.755 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:07.755 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.755 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:07.755 00:12:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.655 00:12:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:09.655 00:32:09.655 real 0m17.701s 00:32:09.655 user 0m25.754s 00:32:09.655 sys 0m3.004s 00:32:09.655 00:12:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:09.655 00:12:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:09.655 ************************************ 00:32:09.655 END TEST nvmf_discovery_remove_ifc 00:32:09.655 ************************************ 00:32:09.913 00:12:15 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:09.913 00:12:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:09.913 00:12:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:09.913 00:12:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:09.913 ************************************ 00:32:09.913 START TEST nvmf_identify_kernel_target 00:32:09.913 ************************************ 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:09.913 * Looking for test storage... 00:32:09.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:09.913 00:12:15 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:09.913 00:12:15 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:09.913 00:12:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:11.815 00:12:17 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:11.815 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:11.815 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:11.815 Found net devices under 0000:09:00.0: cvl_0_0 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:11.815 Found net devices under 
0000:09:00.1: cvl_0_1 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:11.815 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- 
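The enumeration above (common.sh@302-341) fills a `pci_bus_cache` map keyed by vendor:device ID and then builds per-NIC-family arrays (`e810`, `x722`, `mlx`) from it. A minimal reconstruction of that bookkeeping, fed a hypothetical `lspci -Dmmn` sample in place of the real node's hardware so it runs anywhere (the exact key format and parsing are inferred, only the echoed values appear in the trace):

```shell
# Stand-in for `lspci -Dmmn` on the test node: two Intel E810 ports
# (8086:159b) at 0000:09:00.x, matching the "Found" lines above.
sample='0000:09:00.0 "0200" "8086" "159b" -r02 "" ""
0000:09:00.1 "0200" "8086" "159b" -r02 "" ""'

declare -A pci_bus_cache
while read -r addr _class vendor device _; do
  vendor=${vendor//\"/}                  # strip lspci's field quoting
  device=${device//\"/}
  pci_bus_cache["0x$vendor:0x$device"]+="$addr "
done <<< "$sample"

# Same expansion shape as common.sh@302: e810+=(${pci_bus_cache[...]})
e810=(${pci_bus_cache["0x8086:0x159b"]})
echo "found ${#e810[@]} E810 ports: ${e810[*]}"
```

With the sample input this selects both ports, which is why the later `(( 2 == 0 ))` guard at common.sh@335 passes.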
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:11.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:11.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:32:11.816 00:32:11.816 --- 10.0.0.2 ping statistics --- 00:32:11.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:11.816 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:11.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:11.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:32:11.816 00:32:11.816 --- 10.0.0.1 ping statistics --- 00:32:11.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:11.816 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.816 00:12:17 
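The `nvmf_tcp_init` sequence above (common.sh@229-268) moves one physical port into a namespace, assigns the 10.0.0.0/24 pair, opens TCP/4420, and ping-tests both directions. A sketch of the same topology using a veth pair (`veth_ini`/`veth_tgt` are hypothetical names standing in for the back-to-back `cvl_0_1`/`cvl_0_0` ports), assuming root privileges:

```shell
#!/usr/bin/env bash
set -euo pipefail

NS=cvl_0_0_ns_spdk                       # namespace name from the trace
ip netns add "$NS"
ip link add veth_ini type veth peer name veth_tgt
ip link set veth_tgt netns "$NS"

# NVMF_INITIATOR_IP=10.0.0.1 stays in the root namespace,
# NVMF_FIRST_TARGET_IP=10.0.0.2 lives inside the target namespace.
ip addr add 10.0.0.1/24 dev veth_ini
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_ini up
ip netns exec "$NS" ip link set veth_tgt up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port, then verify both directions as the trace does.
iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Isolating the target side in a namespace is what lets a single host act as both initiator and target over real TCP, which the log later exploits by prefixing the target app with `ip netns exec "$NVMF_TARGET_NAMESPACE"`.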
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:11.816 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:12.108 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:12.108 00:12:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:13.049 Waiting for block devices as requested 00:32:13.049 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:13.049 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:13.308 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:13.308 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:13.308 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:13.308 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:13.566 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:13.566 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:13.566 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:32:13.824 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:13.824 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:13.824 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:14.081 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:14.081 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:14.081 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:14.081 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:14.339 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:14.339 00:12:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:14.339 00:12:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:14.339 00:12:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:14.339 00:12:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:14.339 00:12:19 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:14.339 00:12:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:14.339 00:12:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:14.339 00:12:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:14.339 00:12:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:14.339 No valid GPT data, bailing 00:32:14.339 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:14.339 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:14.339 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:14.339 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:14.339 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:14.339 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:14.339 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:14.339 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:14.339 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:14.339 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:14.339 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:14.339 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- 
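The device-selection loop above (common.sh@650-653) skips zoned namespaces and anything carrying a partition table ("No valid GPT data, bailing", then `blkid -s PTTYPE` returns nothing). A pure-bash restatement of that logic, fed simulated `(device, zoned-attr, partition-type)` rows so it runs without hardware; in the live script the second column comes from `/sys/block/<dev>/queue/zoned` and the third from `blkid`:

```shell
# Pick the first NVMe namespace that is neither zoned nor partitioned.
pick_nvme() {
  local dev zoned pt
  while read -r dev zoned pt; do
    [[ $zoned != none ]] && continue   # skip zoned (SMR-style) namespaces
    [[ $pt != - ]] && continue         # skip devices with a partition table
    echo "/dev/$dev"
    return 0
  done
  return 1
}

# Simulated inventory: nvme0n1 carries a GPT, nvme1n1 is clean.
pick_nvme <<'EOF'
nvme0n1 none gpt
nvme1n1 none -
EOF
```

On the test node the first candidate `nvme0n1` had no partition table, so it became the backing device (`nvme=/dev/nvme0n1`) for the kernel target configured next.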
nvmf/common.sh@669 -- # echo 1 00:32:14.339 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:14.339 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:14.339 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:14.339 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:14.339 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:14.598 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:32:14.598 00:32:14.598 Discovery Log Number of Records 2, Generation counter 2 00:32:14.598 =====Discovery Log Entry 0====== 00:32:14.598 trtype: tcp 00:32:14.598 adrfam: ipv4 00:32:14.598 subtype: current discovery subsystem 00:32:14.598 treq: not specified, sq flow control disable supported 00:32:14.598 portid: 1 00:32:14.598 trsvcid: 4420 00:32:14.598 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:14.598 traddr: 10.0.0.1 00:32:14.598 eflags: none 00:32:14.598 sectype: none 00:32:14.598 =====Discovery Log Entry 1====== 00:32:14.598 trtype: tcp 00:32:14.598 adrfam: ipv4 00:32:14.598 subtype: nvme subsystem 00:32:14.598 treq: not specified, sq flow control disable supported 00:32:14.598 portid: 1 00:32:14.598 trsvcid: 4420 00:32:14.598 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:14.598 traddr: 10.0.0.1 00:32:14.598 eflags: none 00:32:14.598 sectype: none 00:32:14.598 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:14.598 trsvcid:4420 
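The mkdir/echo/ln sequence at common.sh@658-677 builds the kernel NVMe-oF target through configfs. The trace records only the commands and echoed values; the attribute file names below (`attr_model`, `attr_allow_any_host`, `device_path`, `enable`, `addr_*`) are the standard kernel nvmet configfs layout and are inferred, not shown in the log:

```shell
#!/usr/bin/env bash
# Export /dev/nvme0n1 as a kernel NVMe/TCP target, as the trace does.
set -euo pipefail

nvmet=/sys/kernel/config/nvmet
nqn=nqn.2016-06.io.spdk:testnqn
subsys=$nvmet/subsystems/$nqn

modprobe nvmet nvmet-tcp

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"

echo "SPDK-$nqn"  > "$subsys/attr_model"           # model number string
echo 1            > "$subsys/attr_allow_any_host"  # accept any host NQN
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"

# Linking the subsystem under the port is what makes it appear in the
# discovery log (Entry 1 in the `nvme discover` output).
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
```

After the symlink, `nvme discover -t tcp -a 10.0.0.1 -s 4420` returns two records, exactly as shown: the discovery subsystem itself plus `nqn.2016-06.io.spdk:testnqn`.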
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:14.598 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.598 ===================================================== 00:32:14.598 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:14.598 ===================================================== 00:32:14.598 Controller Capabilities/Features 00:32:14.598 ================================ 00:32:14.598 Vendor ID: 0000 00:32:14.598 Subsystem Vendor ID: 0000 00:32:14.598 Serial Number: bbdb453e90eff52c516f 00:32:14.598 Model Number: Linux 00:32:14.598 Firmware Version: 6.7.0-68 00:32:14.598 Recommended Arb Burst: 0 00:32:14.598 IEEE OUI Identifier: 00 00 00 00:32:14.598 Multi-path I/O 00:32:14.598 May have multiple subsystem ports: No 00:32:14.598 May have multiple controllers: No 00:32:14.598 Associated with SR-IOV VF: No 00:32:14.598 Max Data Transfer Size: Unlimited 00:32:14.598 Max Number of Namespaces: 0 00:32:14.598 Max Number of I/O Queues: 1024 00:32:14.598 NVMe Specification Version (VS): 1.3 00:32:14.598 NVMe Specification Version (Identify): 1.3 00:32:14.598 Maximum Queue Entries: 1024 00:32:14.598 Contiguous Queues Required: No 00:32:14.598 Arbitration Mechanisms Supported 00:32:14.598 Weighted Round Robin: Not Supported 00:32:14.598 Vendor Specific: Not Supported 00:32:14.598 Reset Timeout: 7500 ms 00:32:14.598 Doorbell Stride: 4 bytes 00:32:14.598 NVM Subsystem Reset: Not Supported 00:32:14.598 Command Sets Supported 00:32:14.598 NVM Command Set: Supported 00:32:14.598 Boot Partition: Not Supported 00:32:14.598 Memory Page Size Minimum: 4096 bytes 00:32:14.598 Memory Page Size Maximum: 4096 bytes 00:32:14.598 Persistent Memory Region: Not Supported 00:32:14.598 Optional Asynchronous Events Supported 00:32:14.598 Namespace Attribute Notices: Not Supported 00:32:14.598 Firmware Activation Notices: Not Supported 00:32:14.598 ANA Change Notices: Not Supported 00:32:14.598 PLE Aggregate Log Change Notices: Not Supported 
00:32:14.598 LBA Status Info Alert Notices: Not Supported 00:32:14.598 EGE Aggregate Log Change Notices: Not Supported 00:32:14.598 Normal NVM Subsystem Shutdown event: Not Supported 00:32:14.598 Zone Descriptor Change Notices: Not Supported 00:32:14.598 Discovery Log Change Notices: Supported 00:32:14.598 Controller Attributes 00:32:14.598 128-bit Host Identifier: Not Supported 00:32:14.598 Non-Operational Permissive Mode: Not Supported 00:32:14.598 NVM Sets: Not Supported 00:32:14.598 Read Recovery Levels: Not Supported 00:32:14.598 Endurance Groups: Not Supported 00:32:14.598 Predictable Latency Mode: Not Supported 00:32:14.599 Traffic Based Keep ALive: Not Supported 00:32:14.599 Namespace Granularity: Not Supported 00:32:14.599 SQ Associations: Not Supported 00:32:14.599 UUID List: Not Supported 00:32:14.599 Multi-Domain Subsystem: Not Supported 00:32:14.599 Fixed Capacity Management: Not Supported 00:32:14.599 Variable Capacity Management: Not Supported 00:32:14.599 Delete Endurance Group: Not Supported 00:32:14.599 Delete NVM Set: Not Supported 00:32:14.599 Extended LBA Formats Supported: Not Supported 00:32:14.599 Flexible Data Placement Supported: Not Supported 00:32:14.599 00:32:14.599 Controller Memory Buffer Support 00:32:14.599 ================================ 00:32:14.599 Supported: No 00:32:14.599 00:32:14.599 Persistent Memory Region Support 00:32:14.599 ================================ 00:32:14.599 Supported: No 00:32:14.599 00:32:14.599 Admin Command Set Attributes 00:32:14.599 ============================ 00:32:14.599 Security Send/Receive: Not Supported 00:32:14.599 Format NVM: Not Supported 00:32:14.599 Firmware Activate/Download: Not Supported 00:32:14.599 Namespace Management: Not Supported 00:32:14.599 Device Self-Test: Not Supported 00:32:14.599 Directives: Not Supported 00:32:14.599 NVMe-MI: Not Supported 00:32:14.599 Virtualization Management: Not Supported 00:32:14.599 Doorbell Buffer Config: Not Supported 00:32:14.599 Get LBA Status 
Capability: Not Supported 00:32:14.599 Command & Feature Lockdown Capability: Not Supported 00:32:14.599 Abort Command Limit: 1 00:32:14.599 Async Event Request Limit: 1 00:32:14.599 Number of Firmware Slots: N/A 00:32:14.599 Firmware Slot 1 Read-Only: N/A 00:32:14.599 Firmware Activation Without Reset: N/A 00:32:14.599 Multiple Update Detection Support: N/A 00:32:14.599 Firmware Update Granularity: No Information Provided 00:32:14.599 Per-Namespace SMART Log: No 00:32:14.599 Asymmetric Namespace Access Log Page: Not Supported 00:32:14.599 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:14.599 Command Effects Log Page: Not Supported 00:32:14.599 Get Log Page Extended Data: Supported 00:32:14.599 Telemetry Log Pages: Not Supported 00:32:14.599 Persistent Event Log Pages: Not Supported 00:32:14.599 Supported Log Pages Log Page: May Support 00:32:14.599 Commands Supported & Effects Log Page: Not Supported 00:32:14.599 Feature Identifiers & Effects Log Page:May Support 00:32:14.599 NVMe-MI Commands & Effects Log Page: May Support 00:32:14.599 Data Area 4 for Telemetry Log: Not Supported 00:32:14.599 Error Log Page Entries Supported: 1 00:32:14.599 Keep Alive: Not Supported 00:32:14.599 00:32:14.599 NVM Command Set Attributes 00:32:14.599 ========================== 00:32:14.599 Submission Queue Entry Size 00:32:14.599 Max: 1 00:32:14.599 Min: 1 00:32:14.599 Completion Queue Entry Size 00:32:14.599 Max: 1 00:32:14.599 Min: 1 00:32:14.599 Number of Namespaces: 0 00:32:14.599 Compare Command: Not Supported 00:32:14.599 Write Uncorrectable Command: Not Supported 00:32:14.599 Dataset Management Command: Not Supported 00:32:14.599 Write Zeroes Command: Not Supported 00:32:14.599 Set Features Save Field: Not Supported 00:32:14.599 Reservations: Not Supported 00:32:14.599 Timestamp: Not Supported 00:32:14.599 Copy: Not Supported 00:32:14.599 Volatile Write Cache: Not Present 00:32:14.599 Atomic Write Unit (Normal): 1 00:32:14.599 Atomic Write Unit (PFail): 1 
00:32:14.599 Atomic Compare & Write Unit: 1 00:32:14.599 Fused Compare & Write: Not Supported 00:32:14.599 Scatter-Gather List 00:32:14.599 SGL Command Set: Supported 00:32:14.599 SGL Keyed: Not Supported 00:32:14.599 SGL Bit Bucket Descriptor: Not Supported 00:32:14.599 SGL Metadata Pointer: Not Supported 00:32:14.599 Oversized SGL: Not Supported 00:32:14.599 SGL Metadata Address: Not Supported 00:32:14.599 SGL Offset: Supported 00:32:14.599 Transport SGL Data Block: Not Supported 00:32:14.599 Replay Protected Memory Block: Not Supported 00:32:14.599 00:32:14.599 Firmware Slot Information 00:32:14.599 ========================= 00:32:14.599 Active slot: 0 00:32:14.599 00:32:14.599 00:32:14.599 Error Log 00:32:14.599 ========= 00:32:14.599 00:32:14.599 Active Namespaces 00:32:14.599 ================= 00:32:14.599 Discovery Log Page 00:32:14.599 ================== 00:32:14.599 Generation Counter: 2 00:32:14.599 Number of Records: 2 00:32:14.599 Record Format: 0 00:32:14.599 00:32:14.599 Discovery Log Entry 0 00:32:14.599 ---------------------- 00:32:14.599 Transport Type: 3 (TCP) 00:32:14.599 Address Family: 1 (IPv4) 00:32:14.599 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:14.599 Entry Flags: 00:32:14.599 Duplicate Returned Information: 0 00:32:14.599 Explicit Persistent Connection Support for Discovery: 0 00:32:14.599 Transport Requirements: 00:32:14.599 Secure Channel: Not Specified 00:32:14.599 Port ID: 1 (0x0001) 00:32:14.599 Controller ID: 65535 (0xffff) 00:32:14.599 Admin Max SQ Size: 32 00:32:14.599 Transport Service Identifier: 4420 00:32:14.599 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:14.599 Transport Address: 10.0.0.1 00:32:14.599 Discovery Log Entry 1 00:32:14.599 ---------------------- 00:32:14.599 Transport Type: 3 (TCP) 00:32:14.599 Address Family: 1 (IPv4) 00:32:14.599 Subsystem Type: 2 (NVM Subsystem) 00:32:14.600 Entry Flags: 00:32:14.600 Duplicate Returned Information: 0 00:32:14.600 Explicit Persistent 
Connection Support for Discovery: 0 00:32:14.600 Transport Requirements: 00:32:14.600 Secure Channel: Not Specified 00:32:14.600 Port ID: 1 (0x0001) 00:32:14.600 Controller ID: 65535 (0xffff) 00:32:14.600 Admin Max SQ Size: 32 00:32:14.600 Transport Service Identifier: 4420 00:32:14.600 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:14.600 Transport Address: 10.0.0.1 00:32:14.600 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:14.600 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.858 get_feature(0x01) failed 00:32:14.858 get_feature(0x02) failed 00:32:14.858 get_feature(0x04) failed 00:32:14.858 ===================================================== 00:32:14.858 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:14.858 ===================================================== 00:32:14.858 Controller Capabilities/Features 00:32:14.858 ================================ 00:32:14.858 Vendor ID: 0000 00:32:14.858 Subsystem Vendor ID: 0000 00:32:14.858 Serial Number: 27fdcdcad3531b70b291 00:32:14.858 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:14.858 Firmware Version: 6.7.0-68 00:32:14.858 Recommended Arb Burst: 6 00:32:14.858 IEEE OUI Identifier: 00 00 00 00:32:14.858 Multi-path I/O 00:32:14.858 May have multiple subsystem ports: Yes 00:32:14.858 May have multiple controllers: Yes 00:32:14.858 Associated with SR-IOV VF: No 00:32:14.858 Max Data Transfer Size: Unlimited 00:32:14.858 Max Number of Namespaces: 1024 00:32:14.858 Max Number of I/O Queues: 128 00:32:14.858 NVMe Specification Version (VS): 1.3 00:32:14.858 NVMe Specification Version (Identify): 1.3 00:32:14.858 Maximum Queue Entries: 1024 00:32:14.858 Contiguous Queues Required: No 00:32:14.858 Arbitration Mechanisms Supported 
00:32:14.858 Weighted Round Robin: Not Supported 00:32:14.858 Vendor Specific: Not Supported 00:32:14.858 Reset Timeout: 7500 ms 00:32:14.858 Doorbell Stride: 4 bytes 00:32:14.858 NVM Subsystem Reset: Not Supported 00:32:14.858 Command Sets Supported 00:32:14.858 NVM Command Set: Supported 00:32:14.858 Boot Partition: Not Supported 00:32:14.858 Memory Page Size Minimum: 4096 bytes 00:32:14.858 Memory Page Size Maximum: 4096 bytes 00:32:14.858 Persistent Memory Region: Not Supported 00:32:14.858 Optional Asynchronous Events Supported 00:32:14.858 Namespace Attribute Notices: Supported 00:32:14.858 Firmware Activation Notices: Not Supported 00:32:14.858 ANA Change Notices: Supported 00:32:14.858 PLE Aggregate Log Change Notices: Not Supported 00:32:14.859 LBA Status Info Alert Notices: Not Supported 00:32:14.859 EGE Aggregate Log Change Notices: Not Supported 00:32:14.859 Normal NVM Subsystem Shutdown event: Not Supported 00:32:14.859 Zone Descriptor Change Notices: Not Supported 00:32:14.859 Discovery Log Change Notices: Not Supported 00:32:14.859 Controller Attributes 00:32:14.859 128-bit Host Identifier: Supported 00:32:14.859 Non-Operational Permissive Mode: Not Supported 00:32:14.859 NVM Sets: Not Supported 00:32:14.859 Read Recovery Levels: Not Supported 00:32:14.859 Endurance Groups: Not Supported 00:32:14.859 Predictable Latency Mode: Not Supported 00:32:14.859 Traffic Based Keep ALive: Supported 00:32:14.859 Namespace Granularity: Not Supported 00:32:14.859 SQ Associations: Not Supported 00:32:14.859 UUID List: Not Supported 00:32:14.859 Multi-Domain Subsystem: Not Supported 00:32:14.859 Fixed Capacity Management: Not Supported 00:32:14.859 Variable Capacity Management: Not Supported 00:32:14.859 Delete Endurance Group: Not Supported 00:32:14.859 Delete NVM Set: Not Supported 00:32:14.859 Extended LBA Formats Supported: Not Supported 00:32:14.859 Flexible Data Placement Supported: Not Supported 00:32:14.859 00:32:14.859 Controller Memory Buffer Support 
00:32:14.859 ================================ 00:32:14.859 Supported: No 00:32:14.859 00:32:14.859 Persistent Memory Region Support 00:32:14.859 ================================ 00:32:14.859 Supported: No 00:32:14.859 00:32:14.859 Admin Command Set Attributes 00:32:14.859 ============================ 00:32:14.859 Security Send/Receive: Not Supported 00:32:14.859 Format NVM: Not Supported 00:32:14.859 Firmware Activate/Download: Not Supported 00:32:14.859 Namespace Management: Not Supported 00:32:14.859 Device Self-Test: Not Supported 00:32:14.859 Directives: Not Supported 00:32:14.859 NVMe-MI: Not Supported 00:32:14.859 Virtualization Management: Not Supported 00:32:14.859 Doorbell Buffer Config: Not Supported 00:32:14.859 Get LBA Status Capability: Not Supported 00:32:14.859 Command & Feature Lockdown Capability: Not Supported 00:32:14.859 Abort Command Limit: 4 00:32:14.859 Async Event Request Limit: 4 00:32:14.859 Number of Firmware Slots: N/A 00:32:14.859 Firmware Slot 1 Read-Only: N/A 00:32:14.859 Firmware Activation Without Reset: N/A 00:32:14.859 Multiple Update Detection Support: N/A 00:32:14.859 Firmware Update Granularity: No Information Provided 00:32:14.859 Per-Namespace SMART Log: Yes 00:32:14.859 Asymmetric Namespace Access Log Page: Supported 00:32:14.859 ANA Transition Time : 10 sec 00:32:14.859 00:32:14.859 Asymmetric Namespace Access Capabilities 00:32:14.859 ANA Optimized State : Supported 00:32:14.859 ANA Non-Optimized State : Supported 00:32:14.859 ANA Inaccessible State : Supported 00:32:14.859 ANA Persistent Loss State : Supported 00:32:14.859 ANA Change State : Supported 00:32:14.859 ANAGRPID is not changed : No 00:32:14.859 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:14.859 00:32:14.859 ANA Group Identifier Maximum : 128 00:32:14.859 Number of ANA Group Identifiers : 128 00:32:14.859 Max Number of Allowed Namespaces : 1024 00:32:14.859 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:14.859 Command Effects Log Page: Supported 
00:32:14.859 Get Log Page Extended Data: Supported 00:32:14.859 Telemetry Log Pages: Not Supported 00:32:14.859 Persistent Event Log Pages: Not Supported 00:32:14.859 Supported Log Pages Log Page: May Support 00:32:14.859 Commands Supported & Effects Log Page: Not Supported 00:32:14.859 Feature Identifiers & Effects Log Page:May Support 00:32:14.859 NVMe-MI Commands & Effects Log Page: May Support 00:32:14.859 Data Area 4 for Telemetry Log: Not Supported 00:32:14.859 Error Log Page Entries Supported: 128 00:32:14.859 Keep Alive: Supported 00:32:14.859 Keep Alive Granularity: 1000 ms 00:32:14.859 00:32:14.859 NVM Command Set Attributes 00:32:14.859 ========================== 00:32:14.859 Submission Queue Entry Size 00:32:14.859 Max: 64 00:32:14.859 Min: 64 00:32:14.859 Completion Queue Entry Size 00:32:14.859 Max: 16 00:32:14.859 Min: 16 00:32:14.859 Number of Namespaces: 1024 00:32:14.859 Compare Command: Not Supported 00:32:14.859 Write Uncorrectable Command: Not Supported 00:32:14.859 Dataset Management Command: Supported 00:32:14.859 Write Zeroes Command: Supported 00:32:14.859 Set Features Save Field: Not Supported 00:32:14.859 Reservations: Not Supported 00:32:14.859 Timestamp: Not Supported 00:32:14.859 Copy: Not Supported 00:32:14.859 Volatile Write Cache: Present 00:32:14.859 Atomic Write Unit (Normal): 1 00:32:14.859 Atomic Write Unit (PFail): 1 00:32:14.859 Atomic Compare & Write Unit: 1 00:32:14.859 Fused Compare & Write: Not Supported 00:32:14.859 Scatter-Gather List 00:32:14.859 SGL Command Set: Supported 00:32:14.859 SGL Keyed: Not Supported 00:32:14.859 SGL Bit Bucket Descriptor: Not Supported 00:32:14.859 SGL Metadata Pointer: Not Supported 00:32:14.859 Oversized SGL: Not Supported 00:32:14.859 SGL Metadata Address: Not Supported 00:32:14.859 SGL Offset: Supported 00:32:14.859 Transport SGL Data Block: Not Supported 00:32:14.859 Replay Protected Memory Block: Not Supported 00:32:14.859 00:32:14.859 Firmware Slot Information 00:32:14.859 
========================= 00:32:14.859 Active slot: 0 00:32:14.859 00:32:14.859 Asymmetric Namespace Access 00:32:14.859 =========================== 00:32:14.859 Change Count : 0 00:32:14.859 Number of ANA Group Descriptors : 1 00:32:14.859 ANA Group Descriptor : 0 00:32:14.859 ANA Group ID : 1 00:32:14.859 Number of NSID Values : 1 00:32:14.859 Change Count : 0 00:32:14.859 ANA State : 1 00:32:14.859 Namespace Identifier : 1 00:32:14.859 00:32:14.859 Commands Supported and Effects 00:32:14.859 ============================== 00:32:14.859 Admin Commands 00:32:14.859 -------------- 00:32:14.859 Get Log Page (02h): Supported 00:32:14.859 Identify (06h): Supported 00:32:14.859 Abort (08h): Supported 00:32:14.859 Set Features (09h): Supported 00:32:14.859 Get Features (0Ah): Supported 00:32:14.859 Asynchronous Event Request (0Ch): Supported 00:32:14.859 Keep Alive (18h): Supported 00:32:14.859 I/O Commands 00:32:14.859 ------------ 00:32:14.859 Flush (00h): Supported 00:32:14.859 Write (01h): Supported LBA-Change 00:32:14.859 Read (02h): Supported 00:32:14.859 Write Zeroes (08h): Supported LBA-Change 00:32:14.859 Dataset Management (09h): Supported 00:32:14.859 00:32:14.859 Error Log 00:32:14.859 ========= 00:32:14.859 Entry: 0 00:32:14.859 Error Count: 0x3 00:32:14.859 Submission Queue Id: 0x0 00:32:14.859 Command Id: 0x5 00:32:14.859 Phase Bit: 0 00:32:14.859 Status Code: 0x2 00:32:14.859 Status Code Type: 0x0 00:32:14.859 Do Not Retry: 1 00:32:14.859 Error Location: 0x28 00:32:14.859 LBA: 0x0 00:32:14.859 Namespace: 0x0 00:32:14.859 Vendor Log Page: 0x0 00:32:14.859 ----------- 00:32:14.859 Entry: 1 00:32:14.859 Error Count: 0x2 00:32:14.859 Submission Queue Id: 0x0 00:32:14.859 Command Id: 0x5 00:32:14.859 Phase Bit: 0 00:32:14.859 Status Code: 0x2 00:32:14.859 Status Code Type: 0x0 00:32:14.859 Do Not Retry: 1 00:32:14.859 Error Location: 0x28 00:32:14.859 LBA: 0x0 00:32:14.859 Namespace: 0x0 00:32:14.859 Vendor Log Page: 0x0 00:32:14.859 ----------- 00:32:14.859 
Entry: 2 00:32:14.859 Error Count: 0x1 00:32:14.859 Submission Queue Id: 0x0 00:32:14.859 Command Id: 0x4 00:32:14.860 Phase Bit: 0 00:32:14.860 Status Code: 0x2 00:32:14.860 Status Code Type: 0x0 00:32:14.860 Do Not Retry: 1 00:32:14.860 Error Location: 0x28 00:32:14.860 LBA: 0x0 00:32:14.860 Namespace: 0x0 00:32:14.860 Vendor Log Page: 0x0 00:32:14.860 00:32:14.860 Number of Queues 00:32:14.860 ================ 00:32:14.860 Number of I/O Submission Queues: 128 00:32:14.860 Number of I/O Completion Queues: 128 00:32:14.860 00:32:14.860 ZNS Specific Controller Data 00:32:14.860 ============================ 00:32:14.860 Zone Append Size Limit: 0 00:32:14.860 00:32:14.860 00:32:14.860 Active Namespaces 00:32:14.860 ================= 00:32:14.860 get_feature(0x05) failed 00:32:14.860 Namespace ID:1 00:32:14.860 Command Set Identifier: NVM (00h) 00:32:14.860 Deallocate: Supported 00:32:14.860 Deallocated/Unwritten Error: Not Supported 00:32:14.860 Deallocated Read Value: Unknown 00:32:14.860 Deallocate in Write Zeroes: Not Supported 00:32:14.860 Deallocated Guard Field: 0xFFFF 00:32:14.860 Flush: Supported 00:32:14.860 Reservation: Not Supported 00:32:14.860 Namespace Sharing Capabilities: Multiple Controllers 00:32:14.860 Size (in LBAs): 1953525168 (931GiB) 00:32:14.860 Capacity (in LBAs): 1953525168 (931GiB) 00:32:14.860 Utilization (in LBAs): 1953525168 (931GiB) 00:32:14.860 UUID: 5edafa9a-094d-41fb-8133-d00eb40785d1 00:32:14.860 Thin Provisioning: Not Supported 00:32:14.860 Per-NS Atomic Units: Yes 00:32:14.860 Atomic Boundary Size (Normal): 0 00:32:14.860 Atomic Boundary Size (PFail): 0 00:32:14.860 Atomic Boundary Offset: 0 00:32:14.860 NGUID/EUI64 Never Reused: No 00:32:14.860 ANA group ID: 1 00:32:14.860 Namespace Write Protected: No 00:32:14.860 Number of LBA Formats: 1 00:32:14.860 Current LBA Format: LBA Format #00 00:32:14.860 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:14.860 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:14.860 rmmod nvme_tcp 00:32:14.860 rmmod nvme_fabrics 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:14.860 00:12:20 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.762 00:12:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:16.762 00:12:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:16.762 00:12:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:16.762 00:12:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:16.762 00:12:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:16.762 00:12:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:16.762 00:12:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:16.762 00:12:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:16.762 00:12:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:16.762 00:12:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:16.762 00:12:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:18.135 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:18.135 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:18.135 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:18.135 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:18.135 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:18.135 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:18.135 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:18.135 0000:00:04.0 (8086 0e20): ioatdma -> 
vfio-pci 00:32:18.135 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:18.135 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:18.135 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:18.135 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:18.135 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:18.135 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:18.135 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:18.135 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:19.070 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:32:19.070 00:32:19.070 real 0m9.351s 00:32:19.070 user 0m1.969s 00:32:19.070 sys 0m3.311s 00:32:19.070 00:12:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:19.070 00:12:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:19.070 ************************************ 00:32:19.070 END TEST nvmf_identify_kernel_target 00:32:19.070 ************************************ 00:32:19.070 00:12:24 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:19.070 00:12:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:19.070 00:12:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:19.070 00:12:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:19.328 ************************************ 00:32:19.328 START TEST nvmf_auth_host 00:32:19.328 ************************************ 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:19.328 * Looking for test storage... 
00:32:19.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:19.328 
00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:19.328 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:19.329 
00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:19.329 00:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:21.227 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:21.227 00:12:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:21.227 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:21.227 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:09:00.0: cvl_0_0' 00:32:21.228 Found net devices under 0000:09:00.0: cvl_0_0 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:21.228 Found net devices under 0000:09:00.1: cvl_0_1 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:21.228 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:21.485 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:21.485 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:21.485 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:21.485 00:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:21.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:21.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:32:21.486 00:32:21.486 --- 10.0.0.2 ping statistics --- 00:32:21.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:21.486 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:21.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:21.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:32:21.486 00:32:21.486 --- 10.0.0.1 ping statistics --- 00:32:21.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:21.486 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.486 00:12:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=924800 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 924800 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 924800 ']' 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:21.486 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:21.744 00:12:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3fb049b49673aa7bd82603cf51ee6f3c 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Je5 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3fb049b49673aa7bd82603cf51ee6f3c 0 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3fb049b49673aa7bd82603cf51ee6f3c 0 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3fb049b49673aa7bd82603cf51ee6f3c 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:21.744 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Je5 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Je5 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Je5 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=193d905813e3292107be19a7a370d73cee905df0781d84f0de15748479627344 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lQw 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 193d905813e3292107be19a7a370d73cee905df0781d84f0de15748479627344 3 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 193d905813e3292107be19a7a370d73cee905df0781d84f0de15748479627344 3 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=193d905813e3292107be19a7a370d73cee905df0781d84f0de15748479627344 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lQw 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lQw 00:32:22.002 00:12:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.lQw 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=96cba30c9f0f95cfd104240e69c4014f379f4546c6f25486 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.EEb 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 96cba30c9f0f95cfd104240e69c4014f379f4546c6f25486 0 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 96cba30c9f0f95cfd104240e69c4014f379f4546c6f25486 0 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=96cba30c9f0f95cfd104240e69c4014f379f4546c6f25486 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.EEb 00:32:22.002 00:12:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.EEb 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.EEb 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c76e429f5a4745548b2173d91af27dc760b96d1b0364746f 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.1V6 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c76e429f5a4745548b2173d91af27dc760b96d1b0364746f 2 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c76e429f5a4745548b2173d91af27dc760b96d1b0364746f 2 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c76e429f5a4745548b2173d91af27dc760b96d1b0364746f 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:22.002 00:12:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.1V6 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.1V6 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.1V6 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:22.002 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bc5ff4074dfda21a64c25ac4b19cc701 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BSN 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bc5ff4074dfda21a64c25ac4b19cc701 1 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bc5ff4074dfda21a64c25ac4b19cc701 1 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bc5ff4074dfda21a64c25ac4b19cc701 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BSN 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BSN 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.BSN 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a812ac1c6e7be982fe65b06dce6663b0 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.puD 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a812ac1c6e7be982fe65b06dce6663b0 1 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a812ac1c6e7be982fe65b06dce6663b0 1 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a812ac1c6e7be982fe65b06dce6663b0 00:32:22.003 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:22.003 
00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.puD 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.puD 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.puD 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d17a9b7dc937c25581c1f7ce374296ebc4f6b3e45f13712e 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.OE4 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d17a9b7dc937c25581c1f7ce374296ebc4f6b3e45f13712e 2 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d17a9b7dc937c25581c1f7ce374296ebc4f6b3e45f13712e 2 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=d17a9b7dc937c25581c1f7ce374296ebc4f6b3e45f13712e 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.OE4 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.OE4 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.OE4 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a2defa660e90096561956095148ce348 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.g5d 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a2defa660e90096561956095148ce348 0 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a2defa660e90096561956095148ce348 0 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:22.261 00:12:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a2defa660e90096561956095148ce348 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.g5d 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.g5d 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.g5d 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8e7eafb27213613777ed30a4d95d458d7a298d2e4371681da2bdda65e8fa41fa 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Nw9 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8e7eafb27213613777ed30a4d95d458d7a298d2e4371681da2bdda65e8fa41fa 3 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8e7eafb27213613777ed30a4d95d458d7a298d2e4371681da2bdda65e8fa41fa 3 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8e7eafb27213613777ed30a4d95d458d7a298d2e4371681da2bdda65e8fa41fa 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Nw9 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Nw9 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Nw9 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 924800 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 924800 ']' 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:22.261 00:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Je5 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.lQw ]] 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lQw 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.EEb 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.1V6 ]] 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1V6 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.BSN 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.puD ]] 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.puD 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.OE4 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.520 
00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.g5d ]] 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.g5d 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Nw9 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:22.520 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:22.521 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:32:22.521 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:22.521 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:22.779 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:22.779 00:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:23.712 Waiting for block devices as requested 00:32:23.712 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:23.712 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:23.712 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:23.970 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:23.970 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:23.970 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:24.228 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:24.228 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:24.228 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:32:24.487 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:24.487 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:24.487 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:24.745 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:24.745 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:24.745 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:24.745 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:25.003 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:25.261 00:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:25.261 00:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:25.261 00:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:25.261 00:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:25.261 00:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:25.261 00:12:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:25.261 00:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:25.261 00:12:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:25.261 00:12:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:25.519 No valid GPT data, bailing 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:32:25.519 00:32:25.519 Discovery Log Number of Records 2, Generation counter 2 00:32:25.519 =====Discovery Log Entry 0====== 00:32:25.519 trtype: tcp 00:32:25.519 adrfam: ipv4 00:32:25.519 subtype: current discovery subsystem 00:32:25.519 treq: not specified, sq flow control disable supported 00:32:25.519 portid: 1 00:32:25.519 trsvcid: 4420 00:32:25.519 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:25.519 traddr: 10.0.0.1 00:32:25.519 eflags: none 00:32:25.519 sectype: none 00:32:25.519 =====Discovery Log Entry 1====== 00:32:25.519 trtype: tcp 00:32:25.519 adrfam: ipv4 00:32:25.519 subtype: nvme subsystem 00:32:25.519 treq: not specified, sq flow control disable supported 00:32:25.519 portid: 1 00:32:25.519 trsvcid: 4420 00:32:25.519 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:25.519 traddr: 10.0.0.1 00:32:25.519 eflags: none 00:32:25.519 sectype: none 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:25.519 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.520 00:12:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: ]] 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.520 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.778 nvme0n1 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:25.778 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: ]] 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.779 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.037 nvme0n1 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: ]] 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.037 00:12:31 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.037 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.295 nvme0n1 00:32:26.295 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.295 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.295 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.295 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.295 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.295 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.295 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.295 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.295 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.295 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.295 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.295 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.295 00:12:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:26.295 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.295 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:26.295 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: ]] 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.296 nvme0n1 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.296 00:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: ]] 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.554 nvme0n1 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.554 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.812 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.812 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.813 nvme0n1 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.813 00:12:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: ]] 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.813 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.071 nvme0n1 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: ]] 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.071 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # dhgroup=ffdhe3072 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:27.072 00:12:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.072 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.330 nvme0n1 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: ]] 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.330 00:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.330 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.330 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.330 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.330 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.330 00:12:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.330 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.330 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.330 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.330 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.330 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.330 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.330 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.330 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:27.330 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.330 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.588 nvme0n1 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:27.588 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: ]] 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.589 00:12:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.589 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.846 nvme0n1 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.846 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.103 nvme0n1 00:32:28.103 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.103 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.103 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.103 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.103 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.103 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.103 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.103 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.103 00:12:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.103 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.103 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.103 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:28.103 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.103 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:28.103 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: ]] 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 
00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.104 00:12:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.104 00:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.668 nvme0n1 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: ]] 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.668 
00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.668 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.669 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.669 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.669 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:28.669 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.669 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.926 nvme0n1 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.926 00:12:34 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:28.926 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: ]] 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.927 00:12:34 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.927 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.185 nvme0n1 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: ]] 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.185 00:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.445 nvme0n1 00:32:29.445 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.445 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.445 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.445 00:12:35 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.445 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.445 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.445 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.445 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.445 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.445 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # 
[[ -z '' ]] 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 
00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.732 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.992 nvme0n1 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: ]] 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:29.992 
00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.992 00:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.558 nvme0n1 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.558 00:12:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:30.558 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: ]] 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.559 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.124 nvme0n1 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: ]] 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:31.124 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.125 00:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.690 nvme0n1 
00:32:31.690 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.690 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.690 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.690 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.690 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.690 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: ]] 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.691 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.256 nvme0n1 00:32:32.256 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.256 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.256 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.256 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.256 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.256 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.514 00:12:37 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.514 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.515 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:32.515 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:32.515 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:32.515 00:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:32.515 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.515 00:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.515 00:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.515 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.515 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.515 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.515 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.515 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.515 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.515 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.515 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.515 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.515 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.515 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.515 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:32.515 00:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.515 00:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.080 nvme0n1 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 
00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: ]] 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.080 00:12:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.080 00:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.024 nvme0n1 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: ]] 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.024 00:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.959 nvme0n1 00:32:34.959 00:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.959 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.959 00:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.959 00:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.959 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.959 00:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.959 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.959 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.959 00:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.959 00:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.216 00:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.216 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.216 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:35.216 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.216 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.216 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:35.216 00:12:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: ]] 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.217 00:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.148 nvme0n1 00:32:36.148 00:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.148 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.148 00:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.148 00:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.148 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.148 00:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:36.148 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.148 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.148 00:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.148 00:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.148 00:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.148 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.148 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:36.148 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.148 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.148 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: ]] 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:36.149 00:12:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.149 00:12:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.149 00:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.078 nvme0n1 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.078 00:12:42 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.078 00:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.010 nvme0n1 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: ]] 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.010 
00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.010 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.011 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.011 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.011 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:38.011 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.011 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.011 nvme0n1 00:32:38.011 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.011 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.011 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.011 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.011 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.011 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.269 
00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==:
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==:
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==:
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: ]]
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==:
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.269 nvme0n1
00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.269 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL:
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi:
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL:
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: ]]
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi:
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:38.270 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:38.527 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:38.527 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:38.527 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:38.527 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:38.527 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:38.528 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:38.528 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:38.528 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:38.528 00:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:38.528 00:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:32:38.528 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:38.528 00:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.528 nvme0n1
00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==:
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm:
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==:
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: ]]
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm:
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:38.528 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.785 nvme0n1
00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=:
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=:
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:32:38.785 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:38.786 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.043 nvme0n1
00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8:
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=:
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8:
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: ]]
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=:
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:39.043 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.300 nvme0n1
00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==:
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==:
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==:
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: ]]
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==:
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:39.300 00:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.558 nvme0n1
00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL:
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi:
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL:
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: ]]
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi:
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:39.558 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.815 nvme0n1
00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==:
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm:
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==:
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: ]]
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm:
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:39.815 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.816 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:39.816 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:39.816 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:39.816 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:39.816 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:39.816 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:39.816 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:39.816 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:39.816 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:39.816 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:39.816 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:39.816 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:39.816 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:32:39.816 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:39.816 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:40.073 nvme0n1
00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:40.073 00:12:45
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.073 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.331 nvme0n1 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:40.331 
00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: ]] 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.331 
00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.331 00:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.588 nvme0n1 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.588 00:12:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: ]] 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.588 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:40.589 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.589 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.589 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.589 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.589 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.589 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.589 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.589 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.589 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.589 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.589 00:12:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.589 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.589 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.589 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.589 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:40.589 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.589 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.152 nvme0n1 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: ]] 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.152 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.410 nvme0n1 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: ]] 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.410 00:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.410 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.410 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.410 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.410 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.410 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.410 00:12:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.410 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.410 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.410 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.410 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.410 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:41.410 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.410 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.668 nvme0n1 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.668 00:12:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:41.668 00:12:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.668 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.926 nvme0n1 00:32:41.926 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.926 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.926 00:12:47 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.926 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.926 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.926 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:42.184 00:12:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: ]] 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.184 00:12:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.184 00:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.748 nvme0n1 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.748 
00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: ]] 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.748 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.311 nvme0n1 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:43.311 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: ]] 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.312 00:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.875 nvme0n1 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.875 00:12:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: ]] 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.875 00:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.439 nvme0n1 00:32:44.439 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.439 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.439 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.439 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.439 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.439 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.439 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.439 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.439 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.439 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.695 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.951 nvme0n1 00:32:44.951 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.951 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.951 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.951 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.951 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.951 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: ]] 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:45.208 00:12:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.208 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.209 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.209 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.209 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.209 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.209 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.209 00:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.209 00:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:45.209 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.209 00:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.140 nvme0n1 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: ]] 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.140 00:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.141 00:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.141 00:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.141 00:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.141 00:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.141 00:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.141 00:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.141 00:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.141 00:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.141 00:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.141 00:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:46.141 00:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.141 00:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.072 nvme0n1 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: ]] 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.072 00:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.039 nvme0n1 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.039 00:12:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: ]] 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.039 00:12:53 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.039 00:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.970 nvme0n1 00:32:48.970 00:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.970 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.970 00:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.970 00:12:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.970 00:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 
00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.971 
00:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.971 00:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.903 nvme0n1 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 
00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: ]] 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- 
# rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.903 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.162 nvme0n1 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.162 00:12:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: ]] 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.162 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.420 nvme0n1 00:32:50.420 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.420 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.420 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.420 00:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.420 00:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: ]] 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:50.420 
00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.420 00:12:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:50.678 nvme0n1 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:50.678 
00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: ]] 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.678 00:12:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.678 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.936 nvme0n1 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.936 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.936 nvme0n1 00:32:50.936 00:12:56 
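The iterations above all follow the same shape: for each (digest, dhgroup, keyid) cell of the test matrix, the harness programs the kernel target, restricts the host's DH-HMAC-CHAP negotiation with `bdev_nvme_set_options`, attaches with the key under test (plus an optional controller key), then detaches. A dry-run sketch of that loop, with stubbed `rpc_cmd` and assumed key handles mirroring the log (keyid 4 has no ckey), could look like:

```shell
#!/usr/bin/env bash
# Dry-run sketch (names assumed, not the real auth.sh): prints the rpc_cmd
# invocations one digest's worth of the auth matrix would issue, instead of
# talking to the SPDK RPC socket.

rpc_cmd() { echo "rpc_cmd $*"; }    # stub for the real SPDK rpc.py wrapper

emit_auth_cmds() {
    local digest=sha512
    local keys=(key0 key1 key2 key3 key4)
    local ckeys=(ckey0 ckey1 ckey2 ckey3 "")   # keyid 4 has no controller key
    local dhgroup keyid

    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
        for keyid in "${!keys[@]}"; do
            # host side: restrict negotiation to this digest/dhgroup pair
            rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
                --dhchap-dhgroups "$dhgroup"
            # attach with the key under test; the ckey argument is optional,
            # exactly like the ckey=(${ckeys[keyid]:+...}) trick in the log
            local ckey_arg=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
                -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" ${ckey_arg[@]+"${ckey_arg[@]}"}
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done
}

emit_auth_cmds
```

The `${ckey_arg[@]+"${ckey_arg[@]}"}` guard keeps the expansion safe when the array is empty, which is why the keyid 4 attach lines in the log carry no `--dhchap-ctrlr-key` flag.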
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: ]] 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.194 00:12:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.194 nvme0n1 00:32:51.194 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.452 
00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:51.452 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: ]] 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.453 00:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.711 nvme0n1 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: ]] 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.711 00:12:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.711 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.969 nvme0n1 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.969 00:12:57 
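Every attach in this log is preceded by the same `get_main_ns_ip` trace: an associative array maps the transport (`rdma`, `tcp`) to the *name* of the environment variable holding the IP, that name is dereferenced, and the result (`10.0.0.1` here) is echoed. A self-contained sketch of that selection logic, with the variable names taken from the trace and the empty-IP fallback assumed, might be:

```shell
#!/usr/bin/env bash
# Sketch of the get_main_ns_ip helper traced above (nvmf/common.sh@741-755).
# The fallback value is an assumption; the trace only shows the non-empty path.

get_main_ns_ip() {
    local transport=${1:-} var ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # RDMA runs target the first target IP
        [tcp]=NVMF_INITIATOR_IP       # TCP runs use the initiator-side IP
    )
    [[ -z $transport ]] && return 1           # mirrors [[ -z tcp ]] check
    var=${ip_candidates[$transport]:-}
    [[ -z $var ]] && return 1                 # unknown transport
    ip=${!var:-}                              # indirect: read the named variable
    [[ -z $ip ]] && ip=10.0.0.1               # assumed default when unset
    echo "$ip"
}
```

Keeping variable *names* in the map and dereferencing late lets one helper serve every transport without hard-coding addresses.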
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:51.969 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: ]] 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.970 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.228 nvme0n1 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.228 00:12:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.228 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.487 nvme0n1 00:32:52.487 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.487 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.487 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.487 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.487 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.487 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:32:52.487 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.487 00:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.487 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.487 00:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: ]] 00:32:52.487 00:12:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.487 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.746 nvme0n1 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:52.746 00:12:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: ]] 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.746 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.005 nvme0n1 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: ]] 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.005 00:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.571 nvme0n1 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: ]] 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.571 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.830 nvme0n1 00:32:53.830 00:12:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.830 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.088 nvme0n1 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:54.088 
00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: ]] 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:54.088 00:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.088 00:12:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.654 nvme0n1 00:32:54.654 00:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.654 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.654 00:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.654 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.654 00:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.654 00:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: ]] 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.912 00:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.478 nvme0n1 00:32:55.478 00:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.478 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.478 00:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.478 00:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.478 00:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.478 00:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.478 00:13:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: ]] 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.478 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.479 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:55.479 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.479 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.045 nvme0n1 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:56.045 00:13:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: ]] 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.045 00:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.612 nvme0n1 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.612 00:13:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.612 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.179 nvme0n1 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- 
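Note how the keyid=4 iteration above attaches with only `--dhchap-key key4`: its `ckey` is empty, so the `${ckeys[keyid]:+...}` expansion at host/auth.sh@58 produces no `--dhchap-ctrlr-key` arguments at all. A minimal, self-contained sketch of that pattern (the key strings and the `build_ckey_args` helper name are placeholders, not the test's real values):

```shell
# ${ckeys[keyid]:+words} expands to the bracketed words only when a
# controller key exists for this keyid; otherwise it expands to nothing,
# so the attach command silently omits --dhchap-ctrlr-key.
ckeys=("DHHC-1:03:placeholder0:" "DHHC-1:02:placeholder1:" "")

build_ckey_args() {
    local keyid=$1
    # Two array elements when ckeys[keyid] is non-empty; empty array otherwise.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]:-<none>}"
}

build_ckey_args 0   # prints: --dhchap-ctrlr-key ckey0
build_ckey_args 2   # prints: <none>
```

The same `bdev_nvme_attach_controller` invocation can therefore serve both the bidirectional iterations (key1..key3 with ckey1..ckey3) and the unidirectional keyid=4 case, without any explicit branching.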
host/auth.sh@44 -- # keyid=0 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiMDQ5YjQ5NjczYWE3YmQ4MjYwM2NmNTFlZTZmM2PLcZH8: 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: ]] 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzZDkwNTgxM2UzMjkyMTA3YmUxOWE3YTM3MGQ3M2NlZTkwNWRmMDc4MWQ4NGYwZGUxNTc0ODQ3OTYyNzM0NPqsKL0=: 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.179 00:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.113 nvme0n1 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:58.113 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: ]] 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.371 00:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.306 nvme0n1 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmM1ZmY0MDc0ZGZkYTIxYTY0YzI1YWM0YjE5Y2M3MDF8adTL: 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: ]] 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTgxMmFjMWM2ZTdiZTk4MmZlNjViMDZkY2U2NjYzYjA6fFYi: 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:59.306 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.307 00:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.241 nvme0n1 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDE3YTliN2RjOTM3YzI1NTgxYzFmN2NlMzc0Mjk2ZWJjNGY2YjNlNDVmMTM3MTJlIFNBSQ==: 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: ]] 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTJkZWZhNjYwZTkwMDk2NTYxOTU2MDk1MTQ4Y2UzNDgAnkHm: 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.241 00:13:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:00.241 00:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.242 00:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.175 nvme0n1 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.175 00:13:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU3ZWFmYjI3MjEzNjEzNzc3ZWQzMGE0ZDk1ZDQ1OGQ3YTI5OGQyZTQzNzE2ODFkYTJiZGRhNjVlOGZhNDFmYegdpk4=: 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:01.175 00:13:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.175 00:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.109 nvme0n1 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZjYmEzMGM5ZjBmOTVjZmQxMDQyNDBlNjljNDAxNGYzNzlmNDU0NmM2ZjI1NDg2NQ7xFg==: 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: ]] 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzc2ZTQyOWY1YTQ3NDU1NDhiMjE3M2Q5MWFmMjdkYzc2MGI5NmQxYjAzNjQ3NDZmPVvkRw==: 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.109 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@648 -- # local es=0 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.110 request: 00:33:02.110 { 00:33:02.110 "name": "nvme0", 00:33:02.110 "trtype": "tcp", 00:33:02.110 "traddr": "10.0.0.1", 00:33:02.110 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:02.110 "adrfam": "ipv4", 00:33:02.110 "trsvcid": "4420", 00:33:02.110 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:02.110 "method": "bdev_nvme_attach_controller", 00:33:02.110 "req_id": 1 00:33:02.110 } 00:33:02.110 Got JSON-RPC error response 00:33:02.110 response: 00:33:02.110 { 00:33:02.110 "code": -5, 00:33:02.110 "message": "Input/output error" 00:33:02.110 } 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:02.110 00:13:07 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:02.110 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key 
key2 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.368 request: 00:33:02.368 { 00:33:02.368 "name": "nvme0", 00:33:02.368 "trtype": "tcp", 00:33:02.368 "traddr": "10.0.0.1", 00:33:02.368 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:02.368 "adrfam": "ipv4", 00:33:02.368 "trsvcid": "4420", 00:33:02.368 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:02.368 "dhchap_key": "key2", 00:33:02.368 "method": "bdev_nvme_attach_controller", 00:33:02.368 "req_id": 1 00:33:02.368 } 00:33:02.368 Got JSON-RPC error response 00:33:02.368 response: 00:33:02.368 { 00:33:02.368 "code": -5, 00:33:02.368 "message": "Input/output error" 00:33:02.368 } 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 
00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:02.368 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.369 00:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.369 request: 00:33:02.369 { 00:33:02.369 "name": "nvme0", 00:33:02.369 "trtype": "tcp", 00:33:02.369 "traddr": "10.0.0.1", 00:33:02.369 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:02.369 "adrfam": "ipv4", 00:33:02.369 "trsvcid": "4420", 00:33:02.369 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:02.369 "dhchap_key": "key1", 00:33:02.369 "dhchap_ctrlr_key": "ckey2", 00:33:02.369 "method": "bdev_nvme_attach_controller", 00:33:02.369 "req_id": 1 00:33:02.369 } 00:33:02.369 Got JSON-RPC error response 00:33:02.369 response: 00:33:02.369 { 00:33:02.369 
"code": -5, 00:33:02.369 "message": "Input/output error" 00:33:02.369 } 00:33:02.369 00:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:02.369 00:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:02.369 00:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:02.369 00:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:02.369 00:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:02.369 00:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:02.369 00:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:02.369 00:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:02.369 00:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:02.369 00:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:02.369 00:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:02.369 00:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:02.369 00:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:02.369 00:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:02.369 rmmod nvme_tcp 00:33:02.369 rmmod nvme_fabrics 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 924800 ']' 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 924800 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 924800 ']' 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@950 -- # kill -0 924800 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 924800 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 924800' 00:33:02.628 killing process with pid 924800 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 924800 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 924800 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:02.628 00:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.162 00:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:05.162 00:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:05.162 00:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:05.162 00:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:05.162 00:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:05.162 00:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:05.162 00:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:05.162 00:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:05.162 00:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:05.162 00:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:05.162 00:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:05.162 00:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:05.162 00:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:06.126 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:06.126 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:06.126 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:06.126 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:06.126 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:06.126 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:06.126 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:06.126 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:06.126 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 
00:33:06.126 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:06.126 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:06.126 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:06.126 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:06.126 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:06.385 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:06.385 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:07.324 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:33:07.324 00:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Je5 /tmp/spdk.key-null.EEb /tmp/spdk.key-sha256.BSN /tmp/spdk.key-sha384.OE4 /tmp/spdk.key-sha512.Nw9 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:07.324 00:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:08.258 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:08.258 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:08.258 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:08.258 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:08.258 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:08.258 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:08.258 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:08.258 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:08.258 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:08.258 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:08.258 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:08.258 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:08.258 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:08.258 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:08.258 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:08.258 0000:80:04.1 
(8086 0e21): Already using the vfio-pci driver 00:33:08.258 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:08.517 00:33:08.517 real 0m49.341s 00:33:08.517 user 0m46.797s 00:33:08.517 sys 0m5.800s 00:33:08.517 00:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:08.517 00:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.517 ************************************ 00:33:08.517 END TEST nvmf_auth_host 00:33:08.517 ************************************ 00:33:08.517 00:13:14 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:08.517 00:13:14 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:08.517 00:13:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:08.517 00:13:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:08.517 00:13:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:08.517 ************************************ 00:33:08.517 START TEST nvmf_digest 00:33:08.517 ************************************ 00:33:08.517 00:13:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:08.776 * Looking for test storage... 
00:33:08.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:08.776 00:13:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:10.676 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:10.676 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:10.676 Found net devices under 0000:09:00.0: cvl_0_0 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:10.676 Found net devices under 0000:09:00.1: cvl_0_1 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:10.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:10.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:33:10.676 00:33:10.676 --- 10.0.0.2 ping statistics --- 00:33:10.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.676 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:10.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:10.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:33:10.676 00:33:10.676 --- 10.0.0.1 ping statistics --- 00:33:10.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.676 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:10.676 00:13:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:10.934 ************************************ 00:33:10.934 START TEST nvmf_digest_clean 00:33:10.934 ************************************ 00:33:10.934 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:33:10.934 00:13:16 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:10.934 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:10.934 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:10.934 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:10.934 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:10.934 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:10.934 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:10.934 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:10.934 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=934223 00:33:10.934 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:10.934 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 934223 00:33:10.934 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 934223 ']' 00:33:10.934 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.934 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:10.934 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:10.934 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:10.934 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:10.934 [2024-07-25 00:13:16.462076] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:10.934 [2024-07-25 00:13:16.462188] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:10.934 EAL: No free 2048 kB hugepages reported on node 1 00:33:10.934 [2024-07-25 00:13:16.526754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.934 [2024-07-25 00:13:16.615625] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.934 [2024-07-25 00:13:16.615707] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:10.934 [2024-07-25 00:13:16.615720] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.934 [2024-07-25 00:13:16.615731] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.934 [2024-07-25 00:13:16.615741] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:10.934 [2024-07-25 00:13:16.615772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:11.192 null0 00:33:11.192 [2024-07-25 00:13:16.816073] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:11.192 [2024-07-25 00:13:16.840289] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:11.192 
00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=934253 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 934253 /var/tmp/bperf.sock 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 934253 ']' 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:11.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:11.192 00:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:11.192 [2024-07-25 00:13:16.889006] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:33:11.192 [2024-07-25 00:13:16.889080] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid934253 ] 00:33:11.449 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.450 [2024-07-25 00:13:16.955192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.450 [2024-07-25 00:13:17.047792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:11.450 00:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:11.450 00:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:11.450 00:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:11.450 00:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:11.450 00:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:11.707 00:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:11.707 00:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:12.270 nvme0n1 00:33:12.270 00:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:12.270 00:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:33:12.528 Running I/O for 2 seconds... 00:33:14.427 00:33:14.427 Latency(us) 00:33:14.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:14.427 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:14.427 nvme0n1 : 2.00 19688.62 76.91 0.00 0.00 6491.99 3446.71 19126.80 00:33:14.427 =================================================================================================================== 00:33:14.427 Total : 19688.62 76.91 0.00 0.00 6491.99 3446.71 19126.80 00:33:14.427 0 00:33:14.427 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:14.427 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:14.427 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:14.427 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:14.427 | select(.opcode=="crc32c") 00:33:14.427 | "\(.module_name) \(.executed)"' 00:33:14.427 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:14.685 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:14.685 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:14.685 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:14.685 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:14.685 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 934253 00:33:14.685 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 934253 ']' 00:33:14.685 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean 
-- common/autotest_common.sh@950 -- # kill -0 934253 00:33:14.685 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:14.685 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:14.685 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 934253 00:33:14.685 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:14.685 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:14.685 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 934253' 00:33:14.685 killing process with pid 934253 00:33:14.685 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 934253 00:33:14.685 Received shutdown signal, test time was about 2.000000 seconds 00:33:14.685 00:33:14.685 Latency(us) 00:33:14.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:14.685 =================================================================================================================== 00:33:14.685 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:14.685 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 934253 00:33:14.944 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:14.944 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:14.944 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:14.944 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:14.944 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # 
bs=131072 00:33:14.944 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:14.944 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:14.944 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=934661 00:33:14.944 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:14.944 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 934661 /var/tmp/bperf.sock 00:33:14.944 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 934661 ']' 00:33:14.944 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:14.944 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:14.944 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:14.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:14.944 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:14.944 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:14.944 [2024-07-25 00:13:20.624328] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:33:14.944 [2024-07-25 00:13:20.624423] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid934661 ] 00:33:14.944 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:14.944 Zero copy mechanism will not be used. 00:33:14.944 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.202 [2024-07-25 00:13:20.687517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.202 [2024-07-25 00:13:20.777718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.202 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:15.202 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:15.202 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:15.202 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:15.202 00:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:15.768 00:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:15.768 00:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:16.027 nvme0n1 00:33:16.027 00:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:16.027 00:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:16.027 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:16.027 Zero copy mechanism will not be used. 00:33:16.027 Running I/O for 2 seconds... 00:33:18.557 00:33:18.557 Latency(us) 00:33:18.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.557 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:18.557 nvme0n1 : 2.00 2285.22 285.65 0.00 0.00 6996.92 5971.06 10437.21 00:33:18.557 =================================================================================================================== 00:33:18.557 Total : 2285.22 285.65 0.00 0.00 6996.92 5971.06 10437.21 00:33:18.557 0 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:18.557 | select(.opcode=="crc32c") 00:33:18.557 | "\(.module_name) \(.executed)"' 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:18.557 00:13:23 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 934661 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 934661 ']' 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 934661 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 934661 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 934661' 00:33:18.557 killing process with pid 934661 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 934661 00:33:18.557 Received shutdown signal, test time was about 2.000000 seconds 00:33:18.557 00:33:18.557 Latency(us) 00:33:18.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.557 =================================================================================================================== 00:33:18.557 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:18.557 00:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 934661 00:33:18.557 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:18.557 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:18.557 00:13:24 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:18.557 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:18.557 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:18.557 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:18.557 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:18.557 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=935079 00:33:18.557 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 935079 /var/tmp/bperf.sock 00:33:18.557 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:18.557 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 935079 ']' 00:33:18.557 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:18.557 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:18.558 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:18.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:18.558 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:18.558 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:18.558 [2024-07-25 00:13:24.244050] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:33:18.558 [2024-07-25 00:13:24.244149] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid935079 ] 00:33:18.816 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.816 [2024-07-25 00:13:24.305056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.816 [2024-07-25 00:13:24.396285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.816 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:18.816 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:18.816 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:18.816 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:18.816 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:19.382 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:19.382 00:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:19.640 nvme0n1 00:33:19.640 00:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:19.640 00:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:33:19.640 Running I/O for 2 seconds... 00:33:22.169 00:33:22.169 Latency(us) 00:33:22.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.169 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:22.169 nvme0n1 : 2.00 21187.94 82.77 0.00 0.00 6030.91 2475.80 10874.12 00:33:22.169 =================================================================================================================== 00:33:22.169 Total : 21187.94 82.77 0.00 0.00 6030.91 2475.80 10874.12 00:33:22.169 0 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:22.169 | select(.opcode=="crc32c") 00:33:22.169 | "\(.module_name) \(.executed)"' 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 935079 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 935079 ']' 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean 
-- common/autotest_common.sh@950 -- # kill -0 935079 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 935079 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 935079' 00:33:22.169 killing process with pid 935079 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 935079 00:33:22.169 Received shutdown signal, test time was about 2.000000 seconds 00:33:22.169 00:33:22.169 Latency(us) 00:33:22.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.169 =================================================================================================================== 00:33:22.169 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 935079 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # 
bs=131072 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=935591 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 935591 /var/tmp/bperf.sock 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 935591 ']' 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:22.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:22.169 00:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:22.427 [2024-07-25 00:13:27.893070] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:33:22.427 [2024-07-25 00:13:27.893163] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid935591 ] 00:33:22.427 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:22.428 Zero copy mechanism will not be used. 00:33:22.428 EAL: No free 2048 kB hugepages reported on node 1 00:33:22.428 [2024-07-25 00:13:27.950911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.428 [2024-07-25 00:13:28.038294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.428 00:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:22.428 00:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:22.428 00:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:22.428 00:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:22.428 00:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:22.994 00:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:22.994 00:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:23.252 nvme0n1 00:33:23.252 00:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:23.252 00:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:23.252 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:23.252 Zero copy mechanism will not be used. 00:33:23.252 Running I/O for 2 seconds... 00:33:25.785 00:33:25.785 Latency(us) 00:33:25.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:25.785 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:25.785 nvme0n1 : 2.01 1639.72 204.97 0.00 0.00 9722.27 4199.16 13883.92 00:33:25.785 =================================================================================================================== 00:33:25.785 Total : 1639.72 204.97 0.00 0.00 9722.27 4199.16 13883.92 00:33:25.785 0 00:33:25.785 00:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:25.785 00:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:25.785 00:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:25.785 00:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:25.785 00:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:25.785 | select(.opcode=="crc32c") 00:33:25.785 | "\(.module_name) \(.executed)"' 00:33:25.785 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:25.785 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:25.785 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:25.785 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:25.785 00:13:31 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 935591 00:33:25.785 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 935591 ']' 00:33:25.785 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 935591 00:33:25.785 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:25.785 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:25.785 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 935591 00:33:25.785 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:25.785 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:25.785 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 935591' 00:33:25.785 killing process with pid 935591 00:33:25.785 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 935591 00:33:25.785 Received shutdown signal, test time was about 2.000000 seconds 00:33:25.785 00:33:25.785 Latency(us) 00:33:25.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:25.785 =================================================================================================================== 00:33:25.785 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:25.785 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 935591 00:33:26.043 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 934223 00:33:26.043 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 934223 ']' 00:33:26.043 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@950 -- # kill -0 934223 00:33:26.043 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:26.044 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:26.044 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 934223 00:33:26.044 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:26.044 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:26.044 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 934223' 00:33:26.044 killing process with pid 934223 00:33:26.044 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 934223 00:33:26.044 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 934223 00:33:26.301 00:33:26.301 real 0m15.365s 00:33:26.301 user 0m30.785s 00:33:26.301 sys 0m3.888s 00:33:26.301 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:26.301 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:26.301 ************************************ 00:33:26.301 END TEST nvmf_digest_clean 00:33:26.301 ************************************ 00:33:26.301 00:13:31 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:26.301 00:13:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:26.301 00:13:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:26.302 00:13:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:26.302 ************************************ 00:33:26.302 START TEST 
nvmf_digest_error 00:33:26.302 ************************************ 00:33:26.302 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:26.302 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:26.302 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:26.302 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:26.302 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:26.302 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=936025 00:33:26.302 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:26.302 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 936025 00:33:26.302 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 936025 ']' 00:33:26.302 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.302 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:26.302 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:26.302 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:26.302 00:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:26.302 [2024-07-25 00:13:31.880242] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:26.302 [2024-07-25 00:13:31.880321] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:26.302 EAL: No free 2048 kB hugepages reported on node 1 00:33:26.302 [2024-07-25 00:13:31.947280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.559 [2024-07-25 00:13:32.038287] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:26.559 [2024-07-25 00:13:32.038330] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:26.559 [2024-07-25 00:13:32.038346] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:26.559 [2024-07-25 00:13:32.038359] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:26.559 [2024-07-25 00:13:32.038369] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:26.559 [2024-07-25 00:13:32.038415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:26.559 [2024-07-25 00:13:32.102940] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:26.559 null0 00:33:26.559 [2024-07-25 00:13:32.219896] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:26.559 
[2024-07-25 00:13:32.244114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=936164 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 936164 /var/tmp/bperf.sock 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 936164 ']' 00:33:26.559 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:26.560 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:26.560 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:26.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:26.560 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:26.560 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:26.835 [2024-07-25 00:13:32.290018] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:26.835 [2024-07-25 00:13:32.290094] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid936164 ] 00:33:26.835 EAL: No free 2048 kB hugepages reported on node 1 00:33:26.835 [2024-07-25 00:13:32.350191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.835 [2024-07-25 00:13:32.442569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.104 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:27.104 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:27.104 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:27.104 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:27.104 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:27.104 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.104 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:27.104 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.104 00:13:32 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:27.104 00:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:27.669 nvme0n1 00:33:27.669 00:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:27.669 00:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.669 00:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:27.669 00:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.669 00:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:27.669 00:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:27.927 Running I/O for 2 seconds... 
00:33:27.927 [2024-07-25 00:13:33.451481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:27.927 [2024-07-25 00:13:33.451533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.927 [2024-07-25 00:13:33.451556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.927 [2024-07-25 00:13:33.465382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:27.927 [2024-07-25 00:13:33.465415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.927 [2024-07-25 00:13:33.465432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.927 [2024-07-25 00:13:33.477246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:27.927 [2024-07-25 00:13:33.477275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.927 [2024-07-25 00:13:33.477291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.927 [2024-07-25 00:13:33.492841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:27.927 [2024-07-25 00:13:33.492877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.927 [2024-07-25 00:13:33.492896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.927 [2024-07-25 00:13:33.508893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:27.927 [2024-07-25 00:13:33.508928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.927 [2024-07-25 00:13:33.508947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.927 [2024-07-25 00:13:33.520418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:27.927 [2024-07-25 00:13:33.520452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.927 [2024-07-25 00:13:33.520471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.927 [2024-07-25 00:13:33.534603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:27.927 [2024-07-25 00:13:33.534637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.927 [2024-07-25 00:13:33.534663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.927 [2024-07-25 00:13:33.548756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:27.927 [2024-07-25 00:13:33.548790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.927 [2024-07-25 00:13:33.548808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.927 [2024-07-25 00:13:33.560934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:27.927 [2024-07-25 00:13:33.560968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.927 [2024-07-25 00:13:33.560986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.927 [2024-07-25 00:13:33.576482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:27.927 [2024-07-25 00:13:33.576516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.927 [2024-07-25 00:13:33.576535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.927 [2024-07-25 00:13:33.591061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:27.927 [2024-07-25 00:13:33.591095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.927 [2024-07-25 00:13:33.591124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.927 [2024-07-25 00:13:33.605075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:27.927 [2024-07-25 00:13:33.605119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:27.927 [2024-07-25 00:13:33.605155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.927 [2024-07-25 00:13:33.618126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:27.927 [2024-07-25 00:13:33.618171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.927 [2024-07-25 00:13:33.618188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.927 [2024-07-25 00:13:33.630840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:27.927 [2024-07-25 00:13:33.630875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.927 [2024-07-25 00:13:33.630893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.644752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.644789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.644809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.661069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.661123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:36 nsid:1 lba:2516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.661158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.674549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.674583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.674601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.687762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.687796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.687814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.701656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.701690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.701709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.715822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.715857] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.715876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.727934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.727967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.727986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.742837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.742872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.742890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.757722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.757757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.757777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.770039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.770074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.770098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.784094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.784161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.784178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.797980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.798014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.798032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.811515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.811549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.811567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.823049] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.823083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.823108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.839066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.839100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.839128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.853467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.853501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.853520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.865170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.865198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.865213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.879680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.879716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.879735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.185 [2024-07-25 00:13:33.894371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.185 [2024-07-25 00:13:33.894407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-25 00:13:33.894425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.443 [2024-07-25 00:13:33.907556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.443 [2024-07-25 00:13:33.907593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.443 [2024-07-25 00:13:33.907613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.443 [2024-07-25 00:13:33.921459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.443 [2024-07-25 00:13:33.921493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.443 [2024-07-25 00:13:33.921512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.443 [2024-07-25 00:13:33.934008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.443 [2024-07-25 00:13:33.934042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.443 [2024-07-25 00:13:33.934061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.443 [2024-07-25 00:13:33.946910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.443 [2024-07-25 00:13:33.946944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.443 [2024-07-25 00:13:33.946963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.443 [2024-07-25 00:13:33.960806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.443 [2024-07-25 00:13:33.960841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.443 [2024-07-25 00:13:33.960859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.443 [2024-07-25 00:13:33.972811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.443 [2024-07-25 00:13:33.972844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.443 [2024-07-25 
00:13:33.972862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.443 [2024-07-25 00:13:33.989886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.443 [2024-07-25 00:13:33.989921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.443 [2024-07-25 00:13:33.989940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.443 [2024-07-25 00:13:34.000662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.443 [2024-07-25 00:13:34.000697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.443 [2024-07-25 00:13:34.000715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.443 [2024-07-25 00:13:34.015174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.443 [2024-07-25 00:13:34.015204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.443 [2024-07-25 00:13:34.015236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.443 [2024-07-25 00:13:34.030261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.443 [2024-07-25 00:13:34.030290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17093 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.443 [2024-07-25 00:13:34.030306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.443 [2024-07-25 00:13:34.044518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.443 [2024-07-25 00:13:34.044553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.443 [2024-07-25 00:13:34.044572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.443 [2024-07-25 00:13:34.056839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.443 [2024-07-25 00:13:34.056874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.444 [2024-07-25 00:13:34.056893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.444 [2024-07-25 00:13:34.071091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.444 [2024-07-25 00:13:34.071149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.444 [2024-07-25 00:13:34.071168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.444 [2024-07-25 00:13:34.083860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.444 [2024-07-25 00:13:34.083895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.444 [2024-07-25 00:13:34.083914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.444 [2024-07-25 00:13:34.098479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.444 [2024-07-25 00:13:34.098513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.444 [2024-07-25 00:13:34.098532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.444 [2024-07-25 00:13:34.113279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.444 [2024-07-25 00:13:34.113310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.444 [2024-07-25 00:13:34.113327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.444 [2024-07-25 00:13:34.125670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.444 [2024-07-25 00:13:34.125704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.444 [2024-07-25 00:13:34.125732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.444 [2024-07-25 00:13:34.138860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x198b8d0) 00:33:28.444 [2024-07-25 00:13:34.138895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.444 [2024-07-25 00:13:34.138914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.444 [2024-07-25 00:13:34.153854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.444 [2024-07-25 00:13:34.153888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.444 [2024-07-25 00:13:34.153906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.702 [2024-07-25 00:13:34.165656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.702 [2024-07-25 00:13:34.165693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-07-25 00:13:34.165713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.702 [2024-07-25 00:13:34.180059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:28.702 [2024-07-25 00:13:34.180093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-07-25 00:13:34.180122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.702 [2024-07-25 00:13:34.193219] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.702 [2024-07-25 00:13:34.193251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.702 [2024-07-25 00:13:34.193268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.702 [2024-07-25 00:13:34.207784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.702 [2024-07-25 00:13:34.207819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.702 [2024-07-25 00:13:34.207838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.702 [2024-07-25 00:13:34.219157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.702 [2024-07-25 00:13:34.219187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.702 [2024-07-25 00:13:34.219204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.702 [2024-07-25 00:13:34.233496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.702 [2024-07-25 00:13:34.233531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.702 [2024-07-25 00:13:34.233550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.702 [2024-07-25 00:13:34.248283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.702 [2024-07-25 00:13:34.248318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.702 [2024-07-25 00:13:34.248336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.702 [2024-07-25 00:13:34.260377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.702 [2024-07-25 00:13:34.260422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.702 [2024-07-25 00:13:34.260441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.702 [2024-07-25 00:13:34.277401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.702 [2024-07-25 00:13:34.277429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.702 [2024-07-25 00:13:34.277463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.702 [2024-07-25 00:13:34.290170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.702 [2024-07-25 00:13:34.290203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.702 [2024-07-25 00:13:34.290222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.702 [2024-07-25 00:13:34.302834] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.702 [2024-07-25 00:13:34.302868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.702 [2024-07-25 00:13:34.302886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.702 [2024-07-25 00:13:34.315651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.702 [2024-07-25 00:13:34.315684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.702 [2024-07-25 00:13:34.315703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.702 [2024-07-25 00:13:34.329571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.702 [2024-07-25 00:13:34.329605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.702 [2024-07-25 00:13:34.329624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.703 [2024-07-25 00:13:34.344898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.703 [2024-07-25 00:13:34.344931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.703 [2024-07-25 00:13:34.344950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.703 [2024-07-25 00:13:34.356188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.703 [2024-07-25 00:13:34.356233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.703 [2024-07-25 00:13:34.356250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.703 [2024-07-25 00:13:34.370413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.703 [2024-07-25 00:13:34.370447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.703 [2024-07-25 00:13:34.370466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.703 [2024-07-25 00:13:34.383686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.703 [2024-07-25 00:13:34.383720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.703 [2024-07-25 00:13:34.383739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.703 [2024-07-25 00:13:34.396757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.703 [2024-07-25 00:13:34.396791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.703 [2024-07-25 00:13:34.396810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.703 [2024-07-25 00:13:34.411195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.703 [2024-07-25 00:13:34.411224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.703 [2024-07-25 00:13:34.411239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.961 [2024-07-25 00:13:34.423412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.961 [2024-07-25 00:13:34.423461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.961 [2024-07-25 00:13:34.423482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.961 [2024-07-25 00:13:34.439201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.961 [2024-07-25 00:13:34.439230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.961 [2024-07-25 00:13:34.439246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.961 [2024-07-25 00:13:34.453851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.961 [2024-07-25 00:13:34.453886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.961 [2024-07-25 00:13:34.453905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.961 [2024-07-25 00:13:34.466647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.961 [2024-07-25 00:13:34.466681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.961 [2024-07-25 00:13:34.466700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.961 [2024-07-25 00:13:34.479754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.961 [2024-07-25 00:13:34.479794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.961 [2024-07-25 00:13:34.479814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.961 [2024-07-25 00:13:34.494616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.961 [2024-07-25 00:13:34.494651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.961 [2024-07-25 00:13:34.494670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.961 [2024-07-25 00:13:34.508395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.961 [2024-07-25 00:13:34.508426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.961 [2024-07-25 00:13:34.508442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.961 [2024-07-25 00:13:34.520549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.961 [2024-07-25 00:13:34.520585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.961 [2024-07-25 00:13:34.520603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.961 [2024-07-25 00:13:34.535620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.961 [2024-07-25 00:13:34.535655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.961 [2024-07-25 00:13:34.535674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.961 [2024-07-25 00:13:34.550921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.961 [2024-07-25 00:13:34.550955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.961 [2024-07-25 00:13:34.550975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.961 [2024-07-25 00:13:34.563687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.961 [2024-07-25 00:13:34.563721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.961 [2024-07-25 00:13:34.563739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.961 [2024-07-25 00:13:34.575708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.961 [2024-07-25 00:13:34.575741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.961 [2024-07-25 00:13:34.575759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.961 [2024-07-25 00:13:34.591794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.961 [2024-07-25 00:13:34.591829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.961 [2024-07-25 00:13:34.591848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.961 [2024-07-25 00:13:34.605575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.961 [2024-07-25 00:13:34.605609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.961 [2024-07-25 00:13:34.605628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.961 [2024-07-25 00:13:34.618245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.961 [2024-07-25 00:13:34.618274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.961 [2024-07-25 00:13:34.618290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.961 [2024-07-25 00:13:34.632596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.961 [2024-07-25 00:13:34.632629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.961 [2024-07-25 00:13:34.632648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.961 [2024-07-25 00:13:34.645362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.961 [2024-07-25 00:13:34.645394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.961 [2024-07-25 00:13:34.645428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.962 [2024-07-25 00:13:34.658634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.962 [2024-07-25 00:13:34.658668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.962 [2024-07-25 00:13:34.658685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:28.962 [2024-07-25 00:13:34.673075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:28.962 [2024-07-25 00:13:34.673144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.962 [2024-07-25 00:13:34.673175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.219 [2024-07-25 00:13:34.686973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.219 [2024-07-25 00:13:34.687009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.219 [2024-07-25 00:13:34.687030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.219 [2024-07-25 00:13:34.701001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.701035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.701055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.220 [2024-07-25 00:13:34.713078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.713119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.713160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.220 [2024-07-25 00:13:34.727623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.727657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.727677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.220 [2024-07-25 00:13:34.740064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.740098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.740127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.220 [2024-07-25 00:13:34.754478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.754512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.754531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.220 [2024-07-25 00:13:34.768629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.768662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.768681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.220 [2024-07-25 00:13:34.780522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.780557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.780575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.220 [2024-07-25 00:13:34.793979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.794013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.794031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.220 [2024-07-25 00:13:34.808556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.808590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.808610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.220 [2024-07-25 00:13:34.820151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.820184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.820202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.220 [2024-07-25 00:13:34.836028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.836067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.836086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.220 [2024-07-25 00:13:34.850129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.850178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.850197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.220 [2024-07-25 00:13:34.863235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.863265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.863295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.220 [2024-07-25 00:13:34.878087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.878130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.878174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.220 [2024-07-25 00:13:34.893610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.893644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.893663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.220 [2024-07-25 00:13:34.906753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.906786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.906805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.220 [2024-07-25 00:13:34.920155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.920186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.920206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.220 [2024-07-25 00:13:34.931902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.220 [2024-07-25 00:13:34.931938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.220 [2024-07-25 00:13:34.931957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:34.947690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:34.947726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:34.947756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:34.960001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:34.960035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:34.960054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:34.974784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:34.974818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:34.974837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:34.989716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:34.989750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:34.989770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:35.001005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:35.001037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:35.001056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:35.015391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:35.015437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:35.015456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:35.029894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:35.029926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:35.029950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:35.043166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:35.043196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:35.043216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:35.055565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:35.055598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:35.055617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:35.071288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:35.071316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:35.071342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:35.083256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:35.083286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:35.083304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:35.097100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:35.097142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:35.097175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:35.113191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:35.113225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:35.113243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:35.124664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:35.124698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:35.124716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:35.139386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:35.139416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:35.139437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:35.155123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:35.155169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:35.155187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:35.166185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:35.166218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:35.166236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.478 [2024-07-25 00:13:35.181453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.478 [2024-07-25 00:13:35.181486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.478 [2024-07-25 00:13:35.181504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.736 [2024-07-25 00:13:35.196411] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.736 [2024-07-25 00:13:35.196449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.736 [2024-07-25 00:13:35.196469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.736 [2024-07-25 00:13:35.208560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.736 [2024-07-25 00:13:35.208595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.736 [2024-07-25 00:13:35.208614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.736 [2024-07-25 00:13:35.221260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.736 [2024-07-25 00:13:35.221294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.736 [2024-07-25 00:13:35.221313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.736 [2024-07-25 00:13:35.235510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.736 [2024-07-25 00:13:35.235545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.736 [2024-07-25 00:13:35.235564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.736 [2024-07-25 00:13:35.249039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.736 [2024-07-25 00:13:35.249073]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.736 [2024-07-25 00:13:35.249092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.736 [2024-07-25 00:13:35.263477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:29.736 [2024-07-25 00:13:35.263510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.736 [2024-07-25 00:13:35.263529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.736 [2024-07-25 00:13:35.277273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:29.736 [2024-07-25 00:13:35.277307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.736 [2024-07-25 00:13:35.277325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.736 [2024-07-25 00:13:35.289964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:29.736 [2024-07-25 00:13:35.289999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.736 [2024-07-25 00:13:35.290017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.736 [2024-07-25 00:13:35.304018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x198b8d0) 00:33:29.736 [2024-07-25 00:13:35.304052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.736 [2024-07-25 00:13:35.304081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.736 [2024-07-25 00:13:35.315314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:29.736 [2024-07-25 00:13:35.315341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.736 [2024-07-25 00:13:35.315361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.736 [2024-07-25 00:13:35.330171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:29.736 [2024-07-25 00:13:35.330206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.736 [2024-07-25 00:13:35.330225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.736 [2024-07-25 00:13:35.345301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:29.736 [2024-07-25 00:13:35.345332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.736 [2024-07-25 00:13:35.345348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.736 [2024-07-25 00:13:35.361678] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:29.736 [2024-07-25 00:13:35.361713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.737 [2024-07-25 00:13:35.361731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.737 [2024-07-25 00:13:35.374553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:29.737 [2024-07-25 00:13:35.374586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.737 [2024-07-25 00:13:35.374605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.737 [2024-07-25 00:13:35.387862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:29.737 [2024-07-25 00:13:35.387896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.737 [2024-07-25 00:13:35.387915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.737 [2024-07-25 00:13:35.400998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0) 00:33:29.737 [2024-07-25 00:13:35.401031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.737 [2024-07-25 00:13:35.401050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:33:29.737 [2024-07-25 00:13:35.414315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.737 [2024-07-25 00:13:35.414346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.737 [2024-07-25 00:13:35.414362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.737 [2024-07-25 00:13:35.430080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.737 [2024-07-25 00:13:35.430144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.737 [2024-07-25 00:13:35.430163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.737 [2024-07-25 00:13:35.440989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198b8d0)
00:33:29.737 [2024-07-25 00:13:35.441024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.737 [2024-07-25 00:13:35.441042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:29.994
00:33:29.994 Latency(us)
00:33:29.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:29.994 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:29.994 nvme0n1 : 2.04 18223.39 71.19 0.00 0.00 6874.91 3883.61 45826.65
00:33:29.994 ===================================================================================================================
00:33:29.994 Total : 18223.39 71.19 0.00 0.00 6874.91 3883.61 45826.65
00:33:29.994 0
00:33:29.994 00:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:29.994 | .driver_specific
00:33:29.994 | .nvme_error
00:33:29.994 | .status_code
00:33:29.994 | .command_transient_transport_error'
00:33:30.252 00:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 ))
00:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 936164
00:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 936164 ']'
00:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 936164
00:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 936164
00:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 936164'
killing process with pid 936164
00:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 936164
00:33:30.252 Received shutdown signal, test time was about 2.000000 seconds
00:33:30.252
00:33:30.252 Latency(us)
00:33:30.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:30.252 ===================================================================================================================
00:33:30.252 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:30.252 00:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 936164
00:33:30.510 00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=936578
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 936578 /var/tmp/bperf.sock
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 936578 ']'
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:30.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:30.510 [2024-07-25 00:13:36.063792] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:33:30.510 [2024-07-25 00:13:36.063891] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid936578 ]
00:33:30.510 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:30.510 Zero copy mechanism will not be used.
00:33:30.510 EAL: No free 2048 kB hugepages reported on node 1
00:33:30.510 [2024-07-25 00:13:36.127547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:30.510 [2024-07-25 00:13:36.217988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:30.768 00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:31.025 00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:31.025 00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:31.025 00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:31.025 00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:31.025 00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:31.282 nvme0n1
00:33:31.282 00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:31.540 00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:31.540 00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:31.540 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:31.540 Zero copy mechanism will not be used.
00:33:31.540 Running I/O for 2 seconds...
00:33:31.540 [2024-07-25 00:13:37.124979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.540 [2024-07-25 00:13:37.125046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.540 [2024-07-25 00:13:37.125069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.540 [2024-07-25 00:13:37.137212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.540 [2024-07-25 00:13:37.137244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.540 [2024-07-25 00:13:37.137261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.540 [2024-07-25 00:13:37.146740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.540 [2024-07-25 00:13:37.146771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.540 [2024-07-25 00:13:37.146799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.540 [2024-07-25 00:13:37.156290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.540 [2024-07-25 00:13:37.156320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.540 [2024-07-25 00:13:37.156348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.540 [2024-07-25 00:13:37.167030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.540 [2024-07-25 00:13:37.167061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.540 [2024-07-25 00:13:37.167085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.540 [2024-07-25 00:13:37.178613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.540 [2024-07-25 00:13:37.178648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.540 [2024-07-25 00:13:37.178673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.540 [2024-07-25 00:13:37.190573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.540 [2024-07-25 00:13:37.190608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.540 [2024-07-25 00:13:37.190631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.541 [2024-07-25 00:13:37.202692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.541 [2024-07-25 00:13:37.202727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.541 [2024-07-25 00:13:37.202760] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.541 [2024-07-25 00:13:37.213936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.541 [2024-07-25 00:13:37.213966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.541 [2024-07-25 00:13:37.213985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.541 [2024-07-25 00:13:37.224363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.541 [2024-07-25 00:13:37.224402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.541 [2024-07-25 00:13:37.224419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.541 [2024-07-25 00:13:37.234907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.541 [2024-07-25 00:13:37.234941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.541 [2024-07-25 00:13:37.234959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.541 [2024-07-25 00:13:37.245361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.541 [2024-07-25 00:13:37.245391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:31.541 [2024-07-25 00:13:37.245407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.799 [2024-07-25 00:13:37.256489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.799 [2024-07-25 00:13:37.256522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.799 [2024-07-25 00:13:37.256539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.799 [2024-07-25 00:13:37.267926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.799 [2024-07-25 00:13:37.267958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.799 [2024-07-25 00:13:37.267975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.799 [2024-07-25 00:13:37.278818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.799 [2024-07-25 00:13:37.278847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.799 [2024-07-25 00:13:37.278868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.799 [2024-07-25 00:13:37.289660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.799 [2024-07-25 00:13:37.289695] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.799 [2024-07-25 00:13:37.289713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.799 [2024-07-25 00:13:37.300978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.799 [2024-07-25 00:13:37.301014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.799 [2024-07-25 00:13:37.301032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.799 [2024-07-25 00:13:37.312160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.799 [2024-07-25 00:13:37.312191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.799 [2024-07-25 00:13:37.312210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.799 [2024-07-25 00:13:37.323550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.799 [2024-07-25 00:13:37.323584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.799 [2024-07-25 00:13:37.323603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.799 [2024-07-25 00:13:37.334472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.799 [2024-07-25 
00:13:37.334505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.799 [2024-07-25 00:13:37.334524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.799 [2024-07-25 00:13:37.346409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.799 [2024-07-25 00:13:37.346439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.799 [2024-07-25 00:13:37.346456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.799 [2024-07-25 00:13:37.357914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.799 [2024-07-25 00:13:37.357945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.799 [2024-07-25 00:13:37.357964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.799 [2024-07-25 00:13:37.369970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:31.799 [2024-07-25 00:13:37.370004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.799 [2024-07-25 00:13:37.370031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.799 [2024-07-25 00:13:37.380416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x15da2c0)
00:33:31.799 [2024-07-25 00:13:37.380465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:31.799 [2024-07-25 00:13:37.380484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:31.799 [2024-07-25 00:13:37.392098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:31.799 [2024-07-25 00:13:37.392151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:31.799 [2024-07-25 00:13:37.392188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:31.799 [2024-07-25 00:13:37.403195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:31.799 [2024-07-25 00:13:37.403226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:31.800 [2024-07-25 00:13:37.403244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:31.800 [2024-07-25 00:13:37.414724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:31.800 [2024-07-25 00:13:37.414758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:31.800 [2024-07-25 00:13:37.414777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:31.800 [2024-07-25 00:13:37.425923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:31.800 [2024-07-25 00:13:37.425957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:31.800 [2024-07-25 00:13:37.425977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:31.800 [2024-07-25 00:13:37.436800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:31.800 [2024-07-25 00:13:37.436834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:31.800 [2024-07-25 00:13:37.436852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:31.800 [2024-07-25 00:13:37.447962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:31.800 [2024-07-25 00:13:37.447991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:31.800 [2024-07-25 00:13:37.448012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:31.800 [2024-07-25 00:13:37.459083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:31.800 [2024-07-25 00:13:37.459119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:31.800 [2024-07-25 00:13:37.459138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:31.800 [2024-07-25 00:13:37.470166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:31.800 [2024-07-25 00:13:37.470197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:31.800 [2024-07-25 00:13:37.470218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:31.800 [2024-07-25 00:13:37.481516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:31.800 [2024-07-25 00:13:37.481551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:31.800 [2024-07-25 00:13:37.481570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:31.800 [2024-07-25 00:13:37.491710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:31.800 [2024-07-25 00:13:37.491754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:31.800 [2024-07-25 00:13:37.491782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:31.800 [2024-07-25 00:13:37.502896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:31.800 [2024-07-25 00:13:37.502928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:31.800 [2024-07-25 00:13:37.502960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:31.800 [2024-07-25 00:13:37.514228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:31.800 [2024-07-25 00:13:37.514272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:31.800 [2024-07-25 00:13:37.514289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.058 [2024-07-25 00:13:37.526575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.058 [2024-07-25 00:13:37.526611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.058 [2024-07-25 00:13:37.526630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:32.058 [2024-07-25 00:13:37.538206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.058 [2024-07-25 00:13:37.538236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.058 [2024-07-25 00:13:37.538253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:32.058 [2024-07-25 00:13:37.550740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.058 [2024-07-25 00:13:37.550775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.058 [2024-07-25 00:13:37.550794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:32.058 [2024-07-25 00:13:37.561788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.058 [2024-07-25 00:13:37.561822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.058 [2024-07-25 00:13:37.561841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.058 [2024-07-25 00:13:37.573018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.058 [2024-07-25 00:13:37.573052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.058 [2024-07-25 00:13:37.573072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:32.058 [2024-07-25 00:13:37.584136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.058 [2024-07-25 00:13:37.584166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.059 [2024-07-25 00:13:37.584182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:32.059 [2024-07-25 00:13:37.595388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.059 [2024-07-25 00:13:37.595443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.059 [2024-07-25 00:13:37.595464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:32.059 [2024-07-25 00:13:37.607079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.059 [2024-07-25 00:13:37.607122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.059 [2024-07-25 00:13:37.607156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.059 [2024-07-25 00:13:37.618663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.059 [2024-07-25 00:13:37.618698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.059 [2024-07-25 00:13:37.618717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:32.059 [2024-07-25 00:13:37.629623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.059 [2024-07-25 00:13:37.629658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.059 [2024-07-25 00:13:37.629677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:32.059 [2024-07-25 00:13:37.641302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.059 [2024-07-25 00:13:37.641332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.059 [2024-07-25 00:13:37.641348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:32.059 [2024-07-25 00:13:37.653557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.059 [2024-07-25 00:13:37.653592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.059 [2024-07-25 00:13:37.653611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.059 [2024-07-25 00:13:37.664473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.059 [2024-07-25 00:13:37.664508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.059 [2024-07-25 00:13:37.664526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:32.059 [2024-07-25 00:13:37.676030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.059 [2024-07-25 00:13:37.676078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.059 [2024-07-25 00:13:37.676098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:32.059 [2024-07-25 00:13:37.686962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.059 [2024-07-25 00:13:37.686998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.059 [2024-07-25 00:13:37.687024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:32.059 [2024-07-25 00:13:37.697869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.059 [2024-07-25 00:13:37.697904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.059 [2024-07-25 00:13:37.697923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.059 [2024-07-25 00:13:37.708902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.059 [2024-07-25 00:13:37.708934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.059 [2024-07-25 00:13:37.708950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:32.059 [2024-07-25 00:13:37.719715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.059 [2024-07-25 00:13:37.719747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.059 [2024-07-25 00:13:37.719781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:32.059 [2024-07-25 00:13:37.730723] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.059 [2024-07-25 00:13:37.730757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.059 [2024-07-25 00:13:37.730790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:32.059 [2024-07-25 00:13:37.742594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.059 [2024-07-25 00:13:37.742631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.059 [2024-07-25 00:13:37.742650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.059 [2024-07-25 00:13:37.752994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.059 [2024-07-25 00:13:37.753029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.059 [2024-07-25 00:13:37.753047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:32.059 [2024-07-25 00:13:37.764113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.059 [2024-07-25 00:13:37.764152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.059 [2024-07-25 00:13:37.764168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:32.317 [2024-07-25 00:13:37.774919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.317 [2024-07-25 00:13:37.774973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.317 [2024-07-25 00:13:37.775006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:32.317 [2024-07-25 00:13:37.786028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.317 [2024-07-25 00:13:37.786069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.317 [2024-07-25 00:13:37.786088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.317 [2024-07-25 00:13:37.797046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.317 [2024-07-25 00:13:37.797078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.317 [2024-07-25 00:13:37.797094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:32.317 [2024-07-25 00:13:37.808841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.317 [2024-07-25 00:13:37.808890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:37.808911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:37.820637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:37.820684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:37.820704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:37.832154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:37.832204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:37.832223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:37.842793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:37.842825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:37.842843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:37.853216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:37.853260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:37.853277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:37.864172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:37.864203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:37.864219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:37.874408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:37.874455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:37.874474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:37.885709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:37.885742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:37.885776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:37.897089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:37.897145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:37.897174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:37.909547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:37.909597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:37.909617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:37.923116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:37.923152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:37.923174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:37.936008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:37.936044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:37.936063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:37.949190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:37.949242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:37.949259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:37.961056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:37.961092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:37.961120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:37.973735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:37.973786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:37.973806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:37.987123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:37.987171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:37.987193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:38.001163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:38.001193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:38.001209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:38.014049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:38.014084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:38.014110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:32.318 [2024-07-25 00:13:38.025851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.318 [2024-07-25 00:13:38.025901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.318 [2024-07-25 00:13:38.025920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.037972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.038009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.038028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.050153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.050200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.050217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.063044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.063076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.063093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.074599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.074634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.074653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.085912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.085946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.085965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.098190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.098240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.098257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.110546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.110596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.110615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.121472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.121506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.121525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.133243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.133279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.133298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.144606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.144642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.144661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.156737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.156773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.156792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.168058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.168093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.168121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.179217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.179246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.179262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.191485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.191532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.191548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.203735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.203785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.203805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.214933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.214967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.214986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.225736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.225771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.225790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.237121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.237168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.237184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.248394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.248424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.248457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.259696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.259725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.259741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.271142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.271190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.577 [2024-07-25 00:13:38.271206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:32.577 [2024-07-25 00:13:38.282368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.577 [2024-07-25 00:13:38.282399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.578 [2024-07-25 00:13:38.282415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:32.836 [2024-07-25 00:13:38.294303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.836 [2024-07-25 00:13:38.294334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.836 [2024-07-25 00:13:38.294377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:32.836 [2024-07-25 00:13:38.305321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.836 [2024-07-25 00:13:38.305352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.836 [2024-07-25 00:13:38.305368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:32.836 [2024-07-25 00:13:38.316270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:32.836 [2024-07-25 00:13:38.316299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.316330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.327843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.327879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.327898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.338874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.338904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.338920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.349085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.349142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.349162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.359420] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.359466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.359482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.369968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.369999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.370031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.380448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.380478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.380494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.390457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.390486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.390517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.400274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.400303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.400319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.410218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.410247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.410277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.420409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.420454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.420470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.430435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.430480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.430496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.440440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.440469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.440499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.450424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.450454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.450488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.460418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.460462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.460481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.470356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.470401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.470423] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.480274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.480318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.480334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.490574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.490620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.490637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.500504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.500547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.500567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.510403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.510435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.510466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.520461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.520506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.520523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.530746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.836 [2024-07-25 00:13:38.530789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.836 [2024-07-25 00:13:38.530806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:32.836 [2024-07-25 00:13:38.540982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.837 [2024-07-25 00:13:38.541010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.837 [2024-07-25 00:13:38.541041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:32.837 [2024-07-25 00:13:38.551219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:32.837 [2024-07-25 00:13:38.551260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.837 [2024-07-25 00:13:38.551287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.561380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.561430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.561446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.571537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.571566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.571598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.581828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.581857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.581873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.591811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.591839] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.591871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.601596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.601639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.601654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.611455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.611500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.611516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.621447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.621492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.621511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.631373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.631403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.631419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.641435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.641480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.641496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.651323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.651351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.651381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.661384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.661411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.661442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.671399] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.671427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.671443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.681817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.681845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.681878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.692077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.692112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.692148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.702253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.702283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.702314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.712294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.712324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.712341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.722335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.722364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.722395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.732380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.732410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.732447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.742508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.742551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.742566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.752456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.752501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.752518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.095 [2024-07-25 00:13:38.762331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.095 [2024-07-25 00:13:38.762360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.095 [2024-07-25 00:13:38.762376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.096 [2024-07-25 00:13:38.772370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.096 [2024-07-25 00:13:38.772417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.096 [2024-07-25 00:13:38.772433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.096 [2024-07-25 00:13:38.782462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.096 [2024-07-25 00:13:38.782489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.096 [2024-07-25 
00:13:38.782520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.096 [2024-07-25 00:13:38.792511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.096 [2024-07-25 00:13:38.792552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.096 [2024-07-25 00:13:38.792567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.096 [2024-07-25 00:13:38.802660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.096 [2024-07-25 00:13:38.802688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.096 [2024-07-25 00:13:38.802704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.354 [2024-07-25 00:13:38.812802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.354 [2024-07-25 00:13:38.812830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.354 [2024-07-25 00:13:38.812860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.354 [2024-07-25 00:13:38.823258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.354 [2024-07-25 00:13:38.823291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.354 [2024-07-25 00:13:38.823308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.354 [2024-07-25 00:13:38.833386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.354 [2024-07-25 00:13:38.833414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.354 [2024-07-25 00:13:38.833445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.354 [2024-07-25 00:13:38.843423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.354 [2024-07-25 00:13:38.843467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.354 [2024-07-25 00:13:38.843483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.354 [2024-07-25 00:13:38.853371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.354 [2024-07-25 00:13:38.853399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.354 [2024-07-25 00:13:38.853428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.354 [2024-07-25 00:13:38.863404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.354 [2024-07-25 00:13:38.863433] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.354 [2024-07-25 00:13:38.863449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.354 [2024-07-25 00:13:38.873525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.354 [2024-07-25 00:13:38.873569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.354 [2024-07-25 00:13:38.873586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.354 [2024-07-25 00:13:38.883320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.354 [2024-07-25 00:13:38.883348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.354 [2024-07-25 00:13:38.883382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.354 [2024-07-25 00:13:38.893497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 00:13:38.893524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:38.893539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.355 [2024-07-25 00:13:38.903663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 
00:13:38.903692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:38.903707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.355 [2024-07-25 00:13:38.913644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 00:13:38.913672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:38.913703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.355 [2024-07-25 00:13:38.923739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 00:13:38.923785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:38.923801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.355 [2024-07-25 00:13:38.934078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 00:13:38.934115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:38.934133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.355 [2024-07-25 00:13:38.944058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 00:13:38.944086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:38.944126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.355 [2024-07-25 00:13:38.954081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 00:13:38.954119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:38.954137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.355 [2024-07-25 00:13:38.963861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 00:13:38.963904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:38.963920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.355 [2024-07-25 00:13:38.973799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 00:13:38.973842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:38.973858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.355 [2024-07-25 00:13:38.984044] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 00:13:38.984072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:38.984112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.355 [2024-07-25 00:13:38.993970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 00:13:38.994019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:38.994036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.355 [2024-07-25 00:13:39.003936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 00:13:39.003979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:39.003994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.355 [2024-07-25 00:13:39.014285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 00:13:39.014314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:39.014345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:33:33.355 [2024-07-25 00:13:39.024417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 00:13:39.024446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:39.024462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.355 [2024-07-25 00:13:39.034397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 00:13:39.034440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:39.034456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.355 [2024-07-25 00:13:39.044607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 00:13:39.044635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:39.044650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.355 [2024-07-25 00:13:39.054553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 00:13:39.054581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:39.054596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.355 [2024-07-25 00:13:39.064505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.355 [2024-07-25 00:13:39.064549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.355 [2024-07-25 00:13:39.064565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.613 [2024-07-25 00:13:39.074547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.613 [2024-07-25 00:13:39.074579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.614 [2024-07-25 00:13:39.074604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.614 [2024-07-25 00:13:39.084530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.614 [2024-07-25 00:13:39.084576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.614 [2024-07-25 00:13:39.084592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.614 [2024-07-25 00:13:39.094419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0) 00:33:33.614 [2024-07-25 00:13:39.094461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.614 [2024-07-25 
00:13:39.094476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:33.614 [2024-07-25 00:13:39.104707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:33.614 [2024-07-25 00:13:39.104737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:33.614 [2024-07-25 00:13:39.104766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:33.614 [2024-07-25 00:13:39.114584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15da2c0)
00:33:33.614 [2024-07-25 00:13:39.114613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:33.614 [2024-07-25 00:13:39.114630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:33.614
00:33:33.614 Latency(us)
00:33:33.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:33.614 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:33.614 nvme0n1 : 2.00 2853.16 356.65 0.00 0.00 5601.74 1565.58 14175.19
00:33:33.614 ===================================================================================================================
00:33:33.614 Total : 2853.16 356.65 0.00 0.00 5601.74 1565.58 14175.19
00:33:33.614 0
00:33:33.614 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:33.614 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:33.614 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error --
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:33.614 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:33.614 | .driver_specific
00:33:33.614 | .nvme_error
00:33:33.614 | .status_code
00:33:33.614 | .command_transient_transport_error'
00:33:33.872 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 184 > 0 ))
00:33:33.872 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 936578
00:33:33.872 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 936578 ']'
00:33:33.872 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 936578
00:33:33.872 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:33.872 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:33.872 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 936578
00:33:33.872 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:33.872 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:33.872 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 936578'
00:33:33.872 killing process with pid 936578
00:33:33.872 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 936578
00:33:33.872 Received shutdown signal, test time was about 2.000000 seconds
00:33:33.872
00:33:33.872 Latency(us)
00:33:33.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:33.872 ===================================================================================================================
00:33:33.872 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:33.872 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 936578
00:33:34.130 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:33:34.130 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:34.130 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:34.130 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:33:34.130 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:33:34.130 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=936982
00:33:34.130 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:33:34.130 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 936982 /var/tmp/bperf.sock
00:33:34.130 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 936982 ']'
00:33:34.130 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:34.130 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:34.130 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
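The `waitforlisten` step above simply polls until the bdevperf RPC server is accepting connections on `/var/tmp/bperf.sock` (up to `max_retries=100` attempts). As a rough illustration only, not the actual helper from `autotest_common.sh` (which also verifies the RPC endpoint responds), the polling loop amounts to:

```python
import socket
import time

def wait_for_listen(sock_path: str, max_retries: int = 100, delay: float = 0.1) -> bool:
    """Poll a UNIX domain socket until a server is accepting connections.

    Sketch of the idea behind waitforlisten; function name and the
    connect-only check are inventions of this illustration.
    """
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True   # something is listening on the socket
        except OSError:
            time.sleep(delay)  # server not up yet; retry after a short delay
        finally:
            s.close()
    return False
```

Called as `wait_for_listen("/var/tmp/bperf.sock")`, this returns once bdevperf has bound its RPC socket, which is why the trace can immediately follow up with `bdev_nvme_set_options` over that socket.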
00:33:34.130 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:33:34.130 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:34.130 [2024-07-25 00:13:39.646187] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:33:34.130 [2024-07-25 00:13:39.646281] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid936982 ]
00:33:34.130 EAL: No free 2048 kB hugepages reported on node 1
00:33:34.130 [2024-07-25 00:13:39.703282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:34.130 [2024-07-25 00:13:39.795369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:34.388 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:33:34.388 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:33:34.388 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:34.388 00:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:34.646 00:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:34.646 00:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:34.646 00:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:34.646 00:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:34.646 00:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:34.646 00:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:34.904 nvme0n1
00:33:34.904 00:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:33:34.904 00:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:34.904 00:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:34.904 00:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:34.904 00:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:34.904 00:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:34.904 Running I/O for 2 seconds...
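The earlier trace shows how the pass/fail decision for each run is made: `bdev_get_iostat` output is piped through a jq filter and the resulting transient-transport-error count must be greater than zero (`(( 184 > 0 ))` above). A minimal Python equivalent of that jq path can be sketched as follows; the sample JSON is hand-made for illustration and only the field path is taken from the jq filter in the trace:

```python
import json

# Hand-made sample shaped like bdev_get_iostat output. Only the field path
# .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error
# comes from the jq filter in the trace; the numbers are illustrative.
SAMPLE = json.loads("""
{
  "bdevs": [{
    "name": "nvme0n1",
    "driver_specific": {
      "nvme_error": {
        "status_code": {"command_transient_transport_error": 184}
      }
    }
  }]
}
""")

def get_transient_errcount(iostat: dict) -> int:
    """Walk the same path as the jq filter in host/digest.sh."""
    return iostat["bdevs"][0]["driver_specific"]["nvme_error"][
        "status_code"]["command_transient_transport_error"]

# The digest-error test passes only if at least one injected CRC32C
# data-digest error was counted as a transient transport error.
assert get_transient_errcount(SAMPLE) > 0
```

Each injected `accel_error_inject_error -o crc32c -t corrupt` failure surfaces as a `COMMAND TRANSIENT TRANSPORT ERROR (00/22)` completion, which is what this counter accumulates.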
00:33:35.163 [2024-07-25 00:13:40.621360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.621674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.621715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.635684] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.635958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.635992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.649895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.650176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.650205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.664151] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.664568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.664602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.678365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.678655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.678687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.692505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.692776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.692809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.706595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.706863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.706895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.720509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.720778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.720810] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.734356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.734639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.734670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.748261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.748538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.748570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.762360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.762647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.762679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.776171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.776557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.776585] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.789181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.789406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.789448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.802430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.802730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.802757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.815642] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.815849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.815883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.828793] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.829059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8055 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:35.163 [2024-07-25 00:13:40.829087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.842111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.842395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.842438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.855266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.855535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.855562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.163 [2024-07-25 00:13:40.868470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.163 [2024-07-25 00:13:40.868691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.163 [2024-07-25 00:13:40.868715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.422 [2024-07-25 00:13:40.881922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.422 [2024-07-25 00:13:40.882153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 
nsid:1 lba:16991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.422 [2024-07-25 00:13:40.882181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.422 [2024-07-25 00:13:40.895165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.422 [2024-07-25 00:13:40.895425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.422 [2024-07-25 00:13:40.895468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.422 [2024-07-25 00:13:40.908594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.422 [2024-07-25 00:13:40.908816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.422 [2024-07-25 00:13:40.908843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.422 [2024-07-25 00:13:40.921991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.422 [2024-07-25 00:13:40.922271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.422 [2024-07-25 00:13:40.922299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.422 [2024-07-25 00:13:40.935343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.422 [2024-07-25 00:13:40.935661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.422 [2024-07-25 00:13:40.935687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.422 [2024-07-25 00:13:40.948659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.422 [2024-07-25 00:13:40.948885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.422 [2024-07-25 00:13:40.948911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.422 [2024-07-25 00:13:40.961955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.422 [2024-07-25 00:13:40.962263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.422 [2024-07-25 00:13:40.962291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.422 [2024-07-25 00:13:40.975189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.422 [2024-07-25 00:13:40.975401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.422 [2024-07-25 00:13:40.975439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.422 [2024-07-25 00:13:40.988451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 
00:33:35.422 [2024-07-25 00:13:40.988757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.422 [2024-07-25 00:13:40.988785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.422 [2024-07-25 00:13:41.001624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.423 [2024-07-25 00:13:41.001845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.423 [2024-07-25 00:13:41.001872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.423 [2024-07-25 00:13:41.014870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.423 [2024-07-25 00:13:41.015089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.423 [2024-07-25 00:13:41.015138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.423 [2024-07-25 00:13:41.027872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.423 [2024-07-25 00:13:41.028123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.423 [2024-07-25 00:13:41.028159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.423 [2024-07-25 00:13:41.041033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.423 [2024-07-25 00:13:41.041287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.423 [2024-07-25 00:13:41.041314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.423 [2024-07-25 00:13:41.054289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.423 [2024-07-25 00:13:41.054566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.423 [2024-07-25 00:13:41.054593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.423 [2024-07-25 00:13:41.067491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.423 [2024-07-25 00:13:41.067708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.423 [2024-07-25 00:13:41.067736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.423 [2024-07-25 00:13:41.080726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.423 [2024-07-25 00:13:41.080972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.423 [2024-07-25 00:13:41.080999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.423 [2024-07-25 00:13:41.093973] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.423 [2024-07-25 00:13:41.094283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.423 [2024-07-25 00:13:41.094311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.423 [2024-07-25 00:13:41.107272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.423 [2024-07-25 00:13:41.107535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.423 [2024-07-25 00:13:41.107562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.423 [2024-07-25 00:13:41.120554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.423 [2024-07-25 00:13:41.120761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.423 [2024-07-25 00:13:41.120802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.423 [2024-07-25 00:13:41.133842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.423 [2024-07-25 00:13:41.134086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.423 [2024-07-25 00:13:41.134123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 
dnr:0 00:33:35.681 [2024-07-25 00:13:41.147169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.681 [2024-07-25 00:13:41.147422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.147450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.160296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.160607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.160641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.173583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.173787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.173829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.186915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.187159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.187186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.200260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.200515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.200542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.213488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.213709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.213736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.226667] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.226887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.226914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.239915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.240228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.240256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.253226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.253452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.253494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.266491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.266713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.266740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.279750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.279979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.280005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.293073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.293408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:35.682 [2024-07-25 00:13:41.293434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.306346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.306581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.306607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.319667] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.319928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.319954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.332938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.333160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.333185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.346074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.346385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2285 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.346427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.359401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.359660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.359687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.372694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.372982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.373010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.682 [2024-07-25 00:13:41.385986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.682 [2024-07-25 00:13:41.386220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.682 [2024-07-25 00:13:41.386247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.399214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.399444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.399472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.412434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.412658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.412686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.425595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.425818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.425845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.438816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.439037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.439064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.452023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 
[2024-07-25 00:13:41.452351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.452380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.465281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.465602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.465629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.478508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.478727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.478752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.491656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.491861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.491888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.504905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.505131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.505182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.518301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.518541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.518567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.531628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.531832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.531858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.544766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.544985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.545011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.557891] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.558134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.558161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.571038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.571361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.571404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.584251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.584462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.584505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.597509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.597726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.597751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
00:33:35.941 [2024-07-25 00:13:41.610769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.611023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.611050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.623936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.624148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.624175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.637182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.637397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.941 [2024-07-25 00:13:41.637438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:35.941 [2024-07-25 00:13:41.650226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:35.941 [2024-07-25 00:13:41.650450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.942 [2024-07-25 00:13:41.650494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.199 [2024-07-25 00:13:41.663647] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.199 [2024-07-25 00:13:41.663870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.199 [2024-07-25 00:13:41.663899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.199 [2024-07-25 00:13:41.676938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.199 [2024-07-25 00:13:41.677143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.199 [2024-07-25 00:13:41.677180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.199 [2024-07-25 00:13:41.690249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.200 [2024-07-25 00:13:41.690476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.200 [2024-07-25 00:13:41.690503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.200 [2024-07-25 00:13:41.703539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.200 [2024-07-25 00:13:41.703758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.200 [2024-07-25 00:13:41.703784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.200 [2024-07-25 00:13:41.716879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.200 [2024-07-25 00:13:41.717121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.200 [2024-07-25 00:13:41.717149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.200 [2024-07-25 00:13:41.730233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.200 [2024-07-25 00:13:41.730484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.200 [2024-07-25 00:13:41.730516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.200 [2024-07-25 00:13:41.744230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.200 [2024-07-25 00:13:41.744473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.200 [2024-07-25 00:13:41.744499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.200 [2024-07-25 00:13:41.758201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.200 [2024-07-25 00:13:41.758445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:36.200 [2024-07-25 00:13:41.758476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.200 [2024-07-25 00:13:41.772173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.200 [2024-07-25 00:13:41.772426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.200 [2024-07-25 00:13:41.772457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.200 [2024-07-25 00:13:41.785982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.200 [2024-07-25 00:13:41.786212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.200 [2024-07-25 00:13:41.786240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.200 [2024-07-25 00:13:41.799805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.200 [2024-07-25 00:13:41.800093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.200 [2024-07-25 00:13:41.800145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.200 [2024-07-25 00:13:41.813850] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.200 [2024-07-25 00:13:41.814082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21645 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.200 [2024-07-25 00:13:41.814147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.200 [2024-07-25 00:13:41.827825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.200 [2024-07-25 00:13:41.828062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.200 [2024-07-25 00:13:41.828108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.200 [2024-07-25 00:13:41.841835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.200 [2024-07-25 00:13:41.842073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.200 [2024-07-25 00:13:41.842112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.200 [2024-07-25 00:13:41.855827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.200 [2024-07-25 00:13:41.856060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.200 [2024-07-25 00:13:41.856112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.200 [2024-07-25 00:13:41.869792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.200 [2024-07-25 00:13:41.870060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.200 [2024-07-25 00:13:41.870091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.200 [2024-07-25 00:13:41.883704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.200 [2024-07-25 00:13:41.883939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.200 [2024-07-25 00:13:41.883970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.200 [2024-07-25 00:13:41.897590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.200 [2024-07-25 00:13:41.897826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.200 [2024-07-25 00:13:41.897857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.200 [2024-07-25 00:13:41.911461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.200 [2024-07-25 00:13:41.911707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.200 [2024-07-25 00:13:41.911748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.458 [2024-07-25 00:13:41.925595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 
00:33:36.458 [2024-07-25 00:13:41.925832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.458 [2024-07-25 00:13:41.925865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.458 [2024-07-25 00:13:41.939569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.458 [2024-07-25 00:13:41.939833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.458 [2024-07-25 00:13:41.939863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.458 [2024-07-25 00:13:41.953576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.458 [2024-07-25 00:13:41.953839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.458 [2024-07-25 00:13:41.953870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.458 [2024-07-25 00:13:41.967577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.458 [2024-07-25 00:13:41.967813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.458 [2024-07-25 00:13:41.967843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.458 [2024-07-25 00:13:41.981551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.458 [2024-07-25 00:13:41.981826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.458 [2024-07-25 00:13:41.981857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.458 [2024-07-25 00:13:41.995555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.458 [2024-07-25 00:13:41.995790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.458 [2024-07-25 00:13:41.995820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.458 [2024-07-25 00:13:42.009503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.458 [2024-07-25 00:13:42.009768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.458 [2024-07-25 00:13:42.009799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.458 [2024-07-25 00:13:42.023488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.458 [2024-07-25 00:13:42.023723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.458 [2024-07-25 00:13:42.023754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.458 [2024-07-25 00:13:42.037382] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.458 [2024-07-25 00:13:42.037643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.458 [2024-07-25 00:13:42.037673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.458 [2024-07-25 00:13:42.051390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.458 [2024-07-25 00:13:42.051670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.458 [2024-07-25 00:13:42.051715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.458 [2024-07-25 00:13:42.065460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.458 [2024-07-25 00:13:42.065697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.458 [2024-07-25 00:13:42.065728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.458 [2024-07-25 00:13:42.079395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.458 [2024-07-25 00:13:42.079647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.458 [2024-07-25 00:13:42.079679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
00:33:36.458 [2024-07-25 00:13:42.093365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.458 [2024-07-25 00:13:42.093608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.458 [2024-07-25 00:13:42.093640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.458 [2024-07-25 00:13:42.107346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.458 [2024-07-25 00:13:42.107592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.459 [2024-07-25 00:13:42.107624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.459 [2024-07-25 00:13:42.121347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.459 [2024-07-25 00:13:42.121596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.459 [2024-07-25 00:13:42.121627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.459 [2024-07-25 00:13:42.135340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.459 [2024-07-25 00:13:42.135587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.459 [2024-07-25 00:13:42.135619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.459 [2024-07-25 00:13:42.149338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.459 [2024-07-25 00:13:42.149582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.459 [2024-07-25 00:13:42.149611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.459 [2024-07-25 00:13:42.162995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.459 [2024-07-25 00:13:42.163255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.459 [2024-07-25 00:13:42.163282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.716 [2024-07-25 00:13:42.177068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.716 [2024-07-25 00:13:42.177329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.716 [2024-07-25 00:13:42.177360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.716 [2024-07-25 00:13:42.191117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.716 [2024-07-25 00:13:42.191365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.716 [2024-07-25 00:13:42.191393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.716 [2024-07-25 00:13:42.205063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.716 [2024-07-25 00:13:42.205320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.716 [2024-07-25 00:13:42.205347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.716 [2024-07-25 00:13:42.219015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.716 [2024-07-25 00:13:42.219371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.716 [2024-07-25 00:13:42.219421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.716 [2024-07-25 00:13:42.232970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.716 [2024-07-25 00:13:42.233223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.716 [2024-07-25 00:13:42.233253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.716 [2024-07-25 00:13:42.246903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.716 [2024-07-25 00:13:42.247154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.716 
[2024-07-25 00:13:42.247183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.716 [2024-07-25 00:13:42.260825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.716 [2024-07-25 00:13:42.261095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.716 [2024-07-25 00:13:42.261149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.716 [2024-07-25 00:13:42.274770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.716 [2024-07-25 00:13:42.275002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.717 [2024-07-25 00:13:42.275033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.717 [2024-07-25 00:13:42.288721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.717 [2024-07-25 00:13:42.289000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.717 [2024-07-25 00:13:42.289031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.717 [2024-07-25 00:13:42.302623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.717 [2024-07-25 00:13:42.302861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1847 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.717 [2024-07-25 00:13:42.302892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.717 [2024-07-25 00:13:42.316607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.717 [2024-07-25 00:13:42.316885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.717 [2024-07-25 00:13:42.316916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.717 [2024-07-25 00:13:42.330548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.717 [2024-07-25 00:13:42.330823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.717 [2024-07-25 00:13:42.330854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.717 [2024-07-25 00:13:42.344507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.717 [2024-07-25 00:13:42.344744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.717 [2024-07-25 00:13:42.344781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.717 [2024-07-25 00:13:42.358525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.717 [2024-07-25 00:13:42.358762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:125 nsid:1 lba:20459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.717 [2024-07-25 00:13:42.358793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.717 [2024-07-25 00:13:42.372601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.717 [2024-07-25 00:13:42.372835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.717 [2024-07-25 00:13:42.372867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.717 [2024-07-25 00:13:42.386531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.717 [2024-07-25 00:13:42.386765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.717 [2024-07-25 00:13:42.386797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.717 [2024-07-25 00:13:42.400440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.717 [2024-07-25 00:13:42.400674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.717 [2024-07-25 00:13:42.400704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.717 [2024-07-25 00:13:42.414338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.717 [2024-07-25 00:13:42.414584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.717 [2024-07-25 00:13:42.414614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.717 [2024-07-25 00:13:42.428167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.717 [2024-07-25 00:13:42.428397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.717 [2024-07-25 00:13:42.428425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.974 [2024-07-25 00:13:42.442389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.974 [2024-07-25 00:13:42.442645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.974 [2024-07-25 00:13:42.442677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.974 [2024-07-25 00:13:42.456411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.974 [2024-07-25 00:13:42.456660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.974 [2024-07-25 00:13:42.456691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.974 [2024-07-25 00:13:42.470370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 
00:33:36.974 [2024-07-25 00:13:42.470616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.974 [2024-07-25 00:13:42.470646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.974 [2024-07-25 00:13:42.484308] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.974 [2024-07-25 00:13:42.484552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.974 [2024-07-25 00:13:42.484583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.974 [2024-07-25 00:13:42.498215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.974 [2024-07-25 00:13:42.498449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.975 [2024-07-25 00:13:42.498480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.975 [2024-07-25 00:13:42.512183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.975 [2024-07-25 00:13:42.512409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.975 [2024-07-25 00:13:42.512456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.975 [2024-07-25 00:13:42.526073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.975 [2024-07-25 00:13:42.526348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.975 [2024-07-25 00:13:42.526374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.975 [2024-07-25 00:13:42.540083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.975 [2024-07-25 00:13:42.540336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.975 [2024-07-25 00:13:42.540363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.975 [2024-07-25 00:13:42.554003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.975 [2024-07-25 00:13:42.554250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.975 [2024-07-25 00:13:42.554276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.975 [2024-07-25 00:13:42.567990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0 00:33:36.975 [2024-07-25 00:13:42.568253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.975 [2024-07-25 00:13:42.568280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.975 [2024-07-25 00:13:42.581971] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0
00:33:36.975 [2024-07-25 00:13:42.582304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:36.975 [2024-07-25 00:13:42.582332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:33:36.975 [2024-07-25 00:13:42.595907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0
00:33:36.975 [2024-07-25 00:13:42.596156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:36.975 [2024-07-25 00:13:42.596184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:33:36.975 [2024-07-25 00:13:42.609716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fdeb0
00:33:36.975 [2024-07-25 00:13:42.609950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:36.975 [2024-07-25 00:13:42.609981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:33:36.975
00:33:36.975 Latency(us)
00:33:36.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:36.975 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:36.975 nvme0n1 : 2.01 18729.32 73.16 0.00 0.00 6817.88 3907.89 14466.47
00:33:36.975 ===================================================================================================================
00:33:36.975 Total : 18729.32 73.16 0.00 0.00 6817.88 3907.89 14466.47
00:33:36.975
0
00:33:36.975 00:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:36.975 00:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:36.975 00:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:36.975 00:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:36.975 | .driver_specific
00:33:36.975 | .nvme_error
00:33:36.975 | .status_code
00:33:36.975 | .command_transient_transport_error'
00:33:37.233 00:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 147 > 0 ))
00:33:37.233 00:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 936982
00:33:37.233 00:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 936982 ']'
00:33:37.233 00:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 936982
00:33:37.233 00:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:37.233 00:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:37.233 00:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 936982
00:33:37.233 00:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:37.233 00:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:37.233 00:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 936982'
killing process with pid 936982
00:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@965 -- # kill 936982
00:33:37.233 Received shutdown signal, test time was about 2.000000 seconds
00:33:37.233
00:33:37.233 Latency(us)
00:33:37.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:37.233 ===================================================================================================================
00:33:37.233 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:37.233 00:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 936982
00:33:37.491 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:33:37.491 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:37.491 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:37.491 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:37.491 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:37.491 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=937392
00:33:37.491 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:33:37.491 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 937392 /var/tmp/bperf.sock
00:33:37.491 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 937392 ']'
00:33:37.491 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:37.491 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:37.491 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting
for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:37.491 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:33:37.491 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:37.491 [2024-07-25 00:13:43.187992] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:33:37.491 [2024-07-25 00:13:43.188089] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937392 ]
00:33:37.491 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:37.491 Zero copy mechanism will not be used.
00:33:37.759 EAL: No free 2048 kB hugepages reported on node 1
00:33:37.759 [2024-07-25 00:13:43.249148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:37.759 [2024-07-25 00:13:43.341376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:37.759 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:33:37.759 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:33:37.759 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:37.759 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:38.022 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:38.022 00:13:43
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.022 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:38.022 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.022 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:38.022 00:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:38.585 nvme0n1 00:33:38.585 00:13:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:38.585 00:13:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.585 00:13:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:38.585 00:13:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.585 00:13:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:38.585 00:13:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:38.585 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:38.585 Zero copy mechanism will not be used. 00:33:38.585 Running I/O for 2 seconds... 
00:33:38.585 [2024-07-25 00:13:44.265275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.585 [2024-07-25 00:13:44.265639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.585 [2024-07-25 00:13:44.265678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.585 [2024-07-25 00:13:44.279424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.585 [2024-07-25 00:13:44.279763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.585 [2024-07-25 00:13:44.279794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.585 [2024-07-25 00:13:44.294455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.585 [2024-07-25 00:13:44.294795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.585 [2024-07-25 00:13:44.294826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.843 [2024-07-25 00:13:44.308723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.843 [2024-07-25 00:13:44.309137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.843 [2024-07-25 00:13:44.309185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.843 [2024-07-25 00:13:44.323173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.843 [2024-07-25 00:13:44.323447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.843 [2024-07-25 00:13:44.323476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.843 [2024-07-25 00:13:44.337017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.843 [2024-07-25 00:13:44.337376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.843 [2024-07-25 00:13:44.337406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.843 [2024-07-25 00:13:44.350508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.843 [2024-07-25 00:13:44.350878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.843 [2024-07-25 00:13:44.350907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.843 [2024-07-25 00:13:44.364746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.843 [2024-07-25 00:13:44.365091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.843 [2024-07-25 00:13:44.365129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.843 [2024-07-25 00:13:44.378791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.843 [2024-07-25 00:13:44.379137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.843 [2024-07-25 00:13:44.379166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.843 [2024-07-25 00:13:44.392868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.843 [2024-07-25 00:13:44.393239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.843 [2024-07-25 00:13:44.393270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.843 [2024-07-25 00:13:44.407843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.843 [2024-07-25 00:13:44.408205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.843 [2024-07-25 00:13:44.408250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.843 [2024-07-25 00:13:44.420472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.843 [2024-07-25 00:13:44.420814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:38.843 [2024-07-25 00:13:44.420843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.843 [2024-07-25 00:13:44.434548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.843 [2024-07-25 00:13:44.434894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.843 [2024-07-25 00:13:44.434937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.843 [2024-07-25 00:13:44.449188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.843 [2024-07-25 00:13:44.449550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.843 [2024-07-25 00:13:44.449580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.843 [2024-07-25 00:13:44.463096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.843 [2024-07-25 00:13:44.463471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.843 [2024-07-25 00:13:44.463501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.843 [2024-07-25 00:13:44.477160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.843 [2024-07-25 00:13:44.477501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.843 [2024-07-25 00:13:44.477529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.843 [2024-07-25 00:13:44.490857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.843 [2024-07-25 00:13:44.491220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.843 [2024-07-25 00:13:44.491250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.843 [2024-07-25 00:13:44.504214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.844 [2024-07-25 00:13:44.504581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.844 [2024-07-25 00:13:44.504623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.844 [2024-07-25 00:13:44.518286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.844 [2024-07-25 00:13:44.518671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.844 [2024-07-25 00:13:44.518728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.844 [2024-07-25 00:13:44.533704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.844 [2024-07-25 00:13:44.534066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.844 [2024-07-25 00:13:44.534121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.844 [2024-07-25 00:13:44.547431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:38.844 [2024-07-25 00:13:44.547652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.844 [2024-07-25 00:13:44.547679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.102 [2024-07-25 00:13:44.561694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.102 [2024-07-25 00:13:44.562175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.102 [2024-07-25 00:13:44.562212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.102 [2024-07-25 00:13:44.575209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.102 [2024-07-25 00:13:44.575579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.102 [2024-07-25 00:13:44.575624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.102 [2024-07-25 00:13:44.589180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 
00:33:39.102 [2024-07-25 00:13:44.589544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.102 [2024-07-25 00:13:44.589588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.102 [2024-07-25 00:13:44.602753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.102 [2024-07-25 00:13:44.603090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.102 [2024-07-25 00:13:44.603138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.102 [2024-07-25 00:13:44.615858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.102 [2024-07-25 00:13:44.616225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.102 [2024-07-25 00:13:44.616270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.102 [2024-07-25 00:13:44.629367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.102 [2024-07-25 00:13:44.629734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.102 [2024-07-25 00:13:44.629762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.102 [2024-07-25 00:13:44.642729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.102 [2024-07-25 00:13:44.643075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.102 [2024-07-25 00:13:44.643130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.102 [2024-07-25 00:13:44.655285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.102 [2024-07-25 00:13:44.655616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.102 [2024-07-25 00:13:44.655644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.102 [2024-07-25 00:13:44.668861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.102 [2024-07-25 00:13:44.669092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.102 [2024-07-25 00:13:44.669129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.102 [2024-07-25 00:13:44.682227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.102 [2024-07-25 00:13:44.682589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.102 [2024-07-25 00:13:44.682616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.102 [2024-07-25 
00:13:44.697076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.103 [2024-07-25 00:13:44.697430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.103 [2024-07-25 00:13:44.697457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.103 [2024-07-25 00:13:44.711157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.103 [2024-07-25 00:13:44.711524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.103 [2024-07-25 00:13:44.711566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.103 [2024-07-25 00:13:44.725806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.103 [2024-07-25 00:13:44.726180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.103 [2024-07-25 00:13:44.726210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.103 [2024-07-25 00:13:44.740195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.103 [2024-07-25 00:13:44.740547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.103 [2024-07-25 00:13:44.740592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.103 [2024-07-25 00:13:44.754019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.103 [2024-07-25 00:13:44.754419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.103 [2024-07-25 00:13:44.754464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.103 [2024-07-25 00:13:44.769686] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.103 [2024-07-25 00:13:44.770097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.103 [2024-07-25 00:13:44.770138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.103 [2024-07-25 00:13:44.783281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.103 [2024-07-25 00:13:44.783650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.103 [2024-07-25 00:13:44.783678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.103 [2024-07-25 00:13:44.797578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.103 [2024-07-25 00:13:44.797928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.103 [2024-07-25 00:13:44.797957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.103 [2024-07-25 00:13:44.811929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.103 [2024-07-25 00:13:44.812285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.103 [2024-07-25 00:13:44.812313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.361 [2024-07-25 00:13:44.825727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.361 [2024-07-25 00:13:44.826067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.361 [2024-07-25 00:13:44.826097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.361 [2024-07-25 00:13:44.839834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.361 [2024-07-25 00:13:44.840207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.361 [2024-07-25 00:13:44.840251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.361 [2024-07-25 00:13:44.853955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.361 [2024-07-25 00:13:44.854329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.361 [2024-07-25 00:13:44.854372] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.361 [2024-07-25 00:13:44.868189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.361 [2024-07-25 00:13:44.868556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.361 [2024-07-25 00:13:44.868585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.361 [2024-07-25 00:13:44.882865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.361 [2024-07-25 00:13:44.883244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.361 [2024-07-25 00:13:44.883276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.361 [2024-07-25 00:13:44.896962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.361 [2024-07-25 00:13:44.897314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.361 [2024-07-25 00:13:44.897344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.361 [2024-07-25 00:13:44.910321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.361 [2024-07-25 00:13:44.910593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:39.361 [2024-07-25 00:13:44.910621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.361 [2024-07-25 00:13:44.924547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.361 [2024-07-25 00:13:44.924889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.361 [2024-07-25 00:13:44.924918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.362 [2024-07-25 00:13:44.938611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.362 [2024-07-25 00:13:44.938955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.362 [2024-07-25 00:13:44.938998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.362 [2024-07-25 00:13:44.951776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.362 [2024-07-25 00:13:44.952143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.362 [2024-07-25 00:13:44.952186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.362 [2024-07-25 00:13:44.965801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.362 [2024-07-25 00:13:44.966177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.362 [2024-07-25 00:13:44.966225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.362 [2024-07-25 00:13:44.980284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.362 [2024-07-25 00:13:44.980589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.362 [2024-07-25 00:13:44.980619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.362 [2024-07-25 00:13:44.994573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.362 [2024-07-25 00:13:44.994911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.362 [2024-07-25 00:13:44.994940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.362 [2024-07-25 00:13:45.009060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.362 [2024-07-25 00:13:45.009445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.362 [2024-07-25 00:13:45.009490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.362 [2024-07-25 00:13:45.021635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.362 [2024-07-25 00:13:45.021980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.362 [2024-07-25 00:13:45.022011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.362 [2024-07-25 00:13:45.035388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.362 [2024-07-25 00:13:45.035765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.362 [2024-07-25 00:13:45.035811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.362 [2024-07-25 00:13:45.049731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.362 [2024-07-25 00:13:45.049943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.362 [2024-07-25 00:13:45.049971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.362 [2024-07-25 00:13:45.064522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.362 [2024-07-25 00:13:45.064855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.362 [2024-07-25 00:13:45.064884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.077761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 
00:33:39.620 [2024-07-25 00:13:45.078145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.078191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.092000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.092348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.092380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.105334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.105696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.105725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.117368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.117635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.117666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.129938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.130430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.130461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.143775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.144222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.144251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.157315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.157911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.157947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.169621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.170182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.170214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 
00:13:45.182713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.183116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.183145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.195214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.195706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.195741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.208575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.209070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.209119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.222174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.222665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.222694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.235912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.236434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.236463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.249425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.249934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.249961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.263520] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.263930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.263958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.277114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.277630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.277658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.291299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.291669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.291696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.304625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.304969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.304996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.317781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.318228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.318257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.620 [2024-07-25 00:13:45.330861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.620 [2024-07-25 00:13:45.331316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.620 [2024-07-25 00:13:45.331344] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.878 [2024-07-25 00:13:45.343400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.878 [2024-07-25 00:13:45.343817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.878 [2024-07-25 00:13:45.343846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.357002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.879 [2024-07-25 00:13:45.357599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.357628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.371135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.879 [2024-07-25 00:13:45.371608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.371635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.384527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.879 [2024-07-25 00:13:45.384973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.385001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.397877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.879 [2024-07-25 00:13:45.398360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.398388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.410934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.879 [2024-07-25 00:13:45.411329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.411357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.423362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.879 [2024-07-25 00:13:45.423849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.423892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.436189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.879 [2024-07-25 00:13:45.436554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.436600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.448937] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.879 [2024-07-25 00:13:45.449451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.449479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.463203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.879 [2024-07-25 00:13:45.463585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.463613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.476532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.879 [2024-07-25 00:13:45.476904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.476932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.489039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.879 [2024-07-25 00:13:45.489544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.489572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.502554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.879 [2024-07-25 00:13:45.502991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.503019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.515957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.879 [2024-07-25 00:13:45.516478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.516506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.529190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.879 [2024-07-25 00:13:45.529624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.529652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.541226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 
00:33:39.879 [2024-07-25 00:13:45.541733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.541766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.554548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.879 [2024-07-25 00:13:45.554955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.554996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.568531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.879 [2024-07-25 00:13:45.569039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.569066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.879 [2024-07-25 00:13:45.582532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:39.879 [2024-07-25 00:13:45.582944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.879 [2024-07-25 00:13:45.582971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.137 [2024-07-25 00:13:45.595689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.137 [2024-07-25 00:13:45.596061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.137 [2024-07-25 00:13:45.596089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.137 [2024-07-25 00:13:45.609488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.137 [2024-07-25 00:13:45.609928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.137 [2024-07-25 00:13:45.609957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.137 [2024-07-25 00:13:45.622868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.137 [2024-07-25 00:13:45.623348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.137 [2024-07-25 00:13:45.623378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.137 [2024-07-25 00:13:45.636806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.137 [2024-07-25 00:13:45.637164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.137 [2024-07-25 00:13:45.637192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.137 [2024-07-25 
00:13:45.650055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.137 [2024-07-25 00:13:45.650598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.137 [2024-07-25 00:13:45.650626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.137 [2024-07-25 00:13:45.663758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.137 [2024-07-25 00:13:45.664179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.137 [2024-07-25 00:13:45.664207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.137 [2024-07-25 00:13:45.676290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.137 [2024-07-25 00:13:45.676761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.137 [2024-07-25 00:13:45.676789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.137 [2024-07-25 00:13:45.689679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.137 [2024-07-25 00:13:45.690096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.138 [2024-07-25 00:13:45.690131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.138 [2024-07-25 00:13:45.703440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.138 [2024-07-25 00:13:45.703893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.138 [2024-07-25 00:13:45.703920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.138 [2024-07-25 00:13:45.717383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.138 [2024-07-25 00:13:45.717786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.138 [2024-07-25 00:13:45.717828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.138 [2024-07-25 00:13:45.730874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.138 [2024-07-25 00:13:45.731264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.138 [2024-07-25 00:13:45.731292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.138 [2024-07-25 00:13:45.742486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.138 [2024-07-25 00:13:45.742857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.138 [2024-07-25 00:13:45.742885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.138 [2024-07-25 00:13:45.754621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.138 [2024-07-25 00:13:45.754982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.138 [2024-07-25 00:13:45.755010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.138 [2024-07-25 00:13:45.767359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.138 [2024-07-25 00:13:45.767806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.138 [2024-07-25 00:13:45.767834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.138 [2024-07-25 00:13:45.780888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.138 [2024-07-25 00:13:45.781377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.138 [2024-07-25 00:13:45.781407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.138 [2024-07-25 00:13:45.794938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.138 [2024-07-25 00:13:45.795479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.138 [2024-07-25 00:13:45.795508] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.138 [2024-07-25 00:13:45.808309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.138 [2024-07-25 00:13:45.808903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.138 [2024-07-25 00:13:45.808930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.138 [2024-07-25 00:13:45.820751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.138 [2024-07-25 00:13:45.821187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.138 [2024-07-25 00:13:45.821215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.138 [2024-07-25 00:13:45.834184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.138 [2024-07-25 00:13:45.834708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.138 [2024-07-25 00:13:45.834744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.138 [2024-07-25 00:13:45.847183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.138 [2024-07-25 00:13:45.847590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:40.138 [2024-07-25 00:13:45.847618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.396 [2024-07-25 00:13:45.860032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.396 [2024-07-25 00:13:45.860583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.396 [2024-07-25 00:13:45.860613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.396 [2024-07-25 00:13:45.873894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.396 [2024-07-25 00:13:45.874293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.396 [2024-07-25 00:13:45.874321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.396 [2024-07-25 00:13:45.887236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.396 [2024-07-25 00:13:45.887673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.396 [2024-07-25 00:13:45.887721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.396 [2024-07-25 00:13:45.900277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.396 [2024-07-25 00:13:45.900734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.396 [2024-07-25 00:13:45.900762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.396 [2024-07-25 00:13:45.913560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.396 [2024-07-25 00:13:45.913979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.396 [2024-07-25 00:13:45.914006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.396 [2024-07-25 00:13:45.927065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.396 [2024-07-25 00:13:45.927526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.396 [2024-07-25 00:13:45.927554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.396 [2024-07-25 00:13:45.941385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.396 [2024-07-25 00:13:45.941736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.396 [2024-07-25 00:13:45.941778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.396 [2024-07-25 00:13:45.954658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.396 [2024-07-25 00:13:45.955168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.396 [2024-07-25 00:13:45.955200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.396 [2024-07-25 00:13:45.967522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.396 [2024-07-25 00:13:45.967959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.396 [2024-07-25 00:13:45.967987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.396 [2024-07-25 00:13:45.979345] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.396 [2024-07-25 00:13:45.979718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.396 [2024-07-25 00:13:45.979745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.396 [2024-07-25 00:13:45.992653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.396 [2024-07-25 00:13:45.993176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.396 [2024-07-25 00:13:45.993205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.396 [2024-07-25 00:13:46.006130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 
00:33:40.396 [2024-07-25 00:13:46.006609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.396 [2024-07-25 00:13:46.006636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.396 [2024-07-25 00:13:46.019230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.396 [2024-07-25 00:13:46.019657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.397 [2024-07-25 00:13:46.019685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.397 [2024-07-25 00:13:46.033028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.397 [2024-07-25 00:13:46.033480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.397 [2024-07-25 00:13:46.033509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.397 [2024-07-25 00:13:46.045687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.397 [2024-07-25 00:13:46.046151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.397 [2024-07-25 00:13:46.046180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.397 [2024-07-25 00:13:46.059110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.397 [2024-07-25 00:13:46.059480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.397 [2024-07-25 00:13:46.059508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.397 [2024-07-25 00:13:46.072223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.397 [2024-07-25 00:13:46.072536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.397 [2024-07-25 00:13:46.072564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.397 [2024-07-25 00:13:46.085617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.397 [2024-07-25 00:13:46.086038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.397 [2024-07-25 00:13:46.086065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.397 [2024-07-25 00:13:46.099350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.397 [2024-07-25 00:13:46.099817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.397 [2024-07-25 00:13:46.099845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.654 [2024-07-25 
00:13:46.113907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.655 [2024-07-25 00:13:46.114501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.655 [2024-07-25 00:13:46.114553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.655 [2024-07-25 00:13:46.127026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.655 [2024-07-25 00:13:46.127523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.655 [2024-07-25 00:13:46.127552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.655 [2024-07-25 00:13:46.141029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.655 [2024-07-25 00:13:46.141550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.655 [2024-07-25 00:13:46.141579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.655 [2024-07-25 00:13:46.155098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.655 [2024-07-25 00:13:46.155634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.655 [2024-07-25 00:13:46.155662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.655 [2024-07-25 00:13:46.168493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.655 [2024-07-25 00:13:46.168818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.655 [2024-07-25 00:13:46.168847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.655 [2024-07-25 00:13:46.182140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.655 [2024-07-25 00:13:46.182623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.655 [2024-07-25 00:13:46.182650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.655 [2024-07-25 00:13:46.196109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.655 [2024-07-25 00:13:46.196631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.655 [2024-07-25 00:13:46.196673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.655 [2024-07-25 00:13:46.209111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:33:40.655 [2024-07-25 00:13:46.209512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.655 [2024-07-25 00:13:46.209540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.655 [2024-07-25 00:13:46.222558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90
00:33:40.655 [2024-07-25 00:13:46.223036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.655 [2024-07-25 00:13:46.223064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.655 [2024-07-25 00:13:46.236914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90
00:33:40.655 [2024-07-25 00:13:46.237313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.655 [2024-07-25 00:13:46.237343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.655 [2024-07-25 00:13:46.250047] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90
00:33:40.655 [2024-07-25 00:13:46.250364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.655 [2024-07-25 00:13:46.250393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.655
00:33:40.655 Latency(us)
00:33:40.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:40.655 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:33:40.655 nvme0n1 : 2.01 2279.52 284.94 0.00 0.00 7001.62 5121.52 15243.19
00:33:40.655 ===================================================================================================================
00:33:40.655 Total : 2279.52 284.94 0.00 0.00 7001.62 5121.52 15243.19
00:33:40.655 0
00:33:40.655 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:40.655 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:40.655 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:40.655 | .driver_specific
00:33:40.655 | .nvme_error
00:33:40.655 | .status_code
00:33:40.655 | .command_transient_transport_error'
00:33:40.655 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:40.913 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 147 > 0 ))
00:33:40.913 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 937392
00:33:40.913 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 937392 ']'
00:33:40.913 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 937392
00:33:40.913 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:40.913 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:40.913 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 937392
00:33:40.913 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:40.913 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:40.913 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error --
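As a sanity check on the bperf summary above, the MiB/s column can be reproduced from the IOPS figure and the 131072-byte IO size reported on the job line. The numbers below are copied from the table, not re-measured; this is just an arithmetic cross-check:

```shell
# Cross-check the bperf summary: IOPS x IO size (bytes) should equal the MiB/s column.
iops=2279.52      # from the nvme0n1 job line above
io_size=131072    # bytes per IO (128 KiB), from the job description
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
```

which prints `284.94 MiB/s`, matching the table's MiB/s column.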
common/autotest_common.sh@964 -- # echo 'killing process with pid 937392'
killing process with pid 937392
00:33:40.913 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 937392
00:33:40.913 Received shutdown signal, test time was about 2.000000 seconds
00:33:40.913
00:33:40.913 Latency(us)
00:33:40.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:40.913 ===================================================================================================================
00:33:40.913 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:40.913 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 937392
00:33:41.171 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 936025
00:33:41.171 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 936025 ']'
00:33:41.171 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 936025
00:33:41.171 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:41.171 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:41.171 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 936025
00:33:41.171 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:33:41.171 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:33:41.171 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 936025'
killing process with pid 936025
00:33:41.171 00:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 936025
00:13:46
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 936025
00:33:41.430
00:33:41.430 real 0m15.237s
00:33:41.430 user 0m30.289s
00:33:41.430 sys 0m4.053s
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:41.430 ************************************
00:33:41.430 END TEST nvmf_digest_error
00:33:41.430 ************************************
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:33:41.430 rmmod nvme_tcp
00:33:41.430 rmmod nvme_fabrics
00:33:41.430 rmmod nvme_keyring
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 936025 ']'
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 936025
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 936025 ']'
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 936025
00:33:41.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (936025) - No such process
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 936025 is not found'
00:33:41.430 Process with pid 936025 is not found
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:33:41.430 00:13:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:33:41.688 00:13:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:33:41.688 00:13:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:33:41.688 00:13:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:41.688 00:13:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:33:41.688 00:13:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:43.627 00:13:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:33:43.627
00:33:43.627 real 0m34.992s
00:33:43.627 user 1m1.878s
00:33:43.627 sys 0m9.512s
00:33:43.627 00:13:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable
00:33:43.627 00:13:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:33:43.627 ************************************
00:33:43.627 END TEST nvmf_digest
00:33:43.627 ************************************
00:33:43.627 00:13:49 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
00:33:43.627 00:13:49 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:33:43.627 00:13:49 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:33:43.627 00:13:49 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:33:43.627
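The killprocess trace above shows why a second kill of an already-reaped pid is harmless: `kill -0` probes for the pid's existence without delivering a signal, and the helper only reports "is not found" when the probe fails. A minimal standalone sketch of that guard (using a short-lived child as a stand-in for the SPDK app, so the pid is guaranteed to be gone; not the SPDK helper itself):

```shell
# Sketch of the kill -0 existence probe used by killprocess:
# kill -0 delivers no signal, it only tests whether the pid can be signalled.
true & pid=$!        # hypothetical stand-in process...
wait "$pid"          # ...which has already exited and been reaped
if kill -0 "$pid" 2>/dev/null; then
    echo "killing process with pid $pid"
    kill "$pid"
else
    echo "Process with pid $pid is not found"
fi
```

Run as-is, this takes the "is not found" branch, mirroring the log lines above.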
00:13:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:43.627 00:13:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:43.627 00:13:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:43.627 ************************************ 00:33:43.627 START TEST nvmf_bdevperf 00:33:43.627 ************************************ 00:33:43.627 00:13:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:43.627 * Looking for test storage... 00:33:43.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:43.627 00:13:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:43.627 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:43.627 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:43.627 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:43.627 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 
00:33:43.628 00:13:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:45.528 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:45.528 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:45.528 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:45.528 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:45.528 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:45.528 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:45.528 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:45.528 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:45.528 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:45.528 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:45.528 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:45.528 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:45.528 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:45.528 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:45.528 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:45.529 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:45.529 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:45.529 Found net devices under 0000:09:00.0: cvl_0_0 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:45.529 Found net devices under 0000:09:00.1: cvl_0_1 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:45.529 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:45.786 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:45.786 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:45.786 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:45.786 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:45.786 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:45.786 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:45.786 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:45.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:45.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:33:45.786 00:33:45.786 --- 10.0.0.2 ping statistics --- 00:33:45.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.786 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:33:45.786 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:45.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:45.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:33:45.786 00:33:45.786 --- 10.0.0.1 ping statistics --- 00:33:45.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.786 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:33:45.786 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:45.786 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:45.786 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=939739 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 939739 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 939739 ']' 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:45.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:45.787 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:45.787 [2024-07-25 00:13:51.441518] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:45.787 [2024-07-25 00:13:51.441601] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:45.787 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.044 [2024-07-25 00:13:51.508845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:46.044 [2024-07-25 00:13:51.601750] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
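The nvmf_tcp_init trace above (common.sh@229-268) builds the two-port loopback topology used by these TCP autotests: one port of the NIC (cvl_0_0) is moved into a fresh network namespace as the target side, the peer port (cvl_0_1) stays in the root namespace as the initiator, and connectivity is verified with one ping in each direction. A minimal sketch of the same plumbing, with the interface names treated as placeholders (requires root and real or veth interfaces):

```shell
# Sketch of the nvmf_tcp_init namespace plumbing traced above.
set -e
TGT_IF=cvl_0_0          # becomes the target-side port, inside the netns
INI_IF=cvl_0_1          # stays in the root namespace as the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# allow NVMe/TCP (port 4420) in from the initiator-facing interface
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                      # root ns -> target side
ip netns exec "$NS" ping -c 1 10.0.0.1  # target ns -> initiator side
```

Once this is in place, anything launched under `ip netns exec cvl_0_0_ns_spdk …` (the NVMF_TARGET_NS_CMD prefix seen in the trace) listens on 10.0.0.2 while initiators connect from 10.0.0.1 in the root namespace.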
00:33:46.044 [2024-07-25 00:13:51.601796] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:46.044 [2024-07-25 00:13:51.601809] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:46.044 [2024-07-25 00:13:51.601821] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:46.044 [2024-07-25 00:13:51.601831] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:46.044 [2024-07-25 00:13:51.601929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:46.044 [2024-07-25 00:13:51.601993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:46.044 [2024-07-25 00:13:51.601996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.044 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:46.044 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:46.044 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:46.044 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:46.044 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:46.044 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:46.044 00:13:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:46.044 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.044 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:46.044 [2024-07-25 00:13:51.748561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:46.044 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:33:46.044 00:13:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:46.044 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.044 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:46.302 Malloc0 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:46.302 [2024-07-25 00:13:51.806715] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:46.302 { 00:33:46.302 "params": { 00:33:46.302 "name": "Nvme$subsystem", 00:33:46.302 "trtype": "$TEST_TRANSPORT", 00:33:46.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:46.302 "adrfam": "ipv4", 00:33:46.302 "trsvcid": "$NVMF_PORT", 00:33:46.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:46.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:46.302 "hdgst": ${hdgst:-false}, 00:33:46.302 "ddgst": ${ddgst:-false} 00:33:46.302 }, 00:33:46.302 "method": "bdev_nvme_attach_controller" 00:33:46.302 } 00:33:46.302 EOF 00:33:46.302 )") 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
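The tgt_init steps traced above (bdevperf.sh@15-21) reduce to starting nvmf_tgt inside the target namespace and issuing four RPCs: create the TCP transport, back it with a Malloc bdev, wrap that in a subsystem, and publish a listener. The same sequence can be sketched with scripts/rpc.py in place of the harness's rpc_cmd wrapper (same RPC method names as in the trace; the SPDK checkout path is a placeholder):

```shell
# tgt_init RPC sequence from the trace, driven via scripts/rpc.py.
# $SPDK is a placeholder; requires root and the netns from nvmf_tcp_init.
SPDK=/path/to/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
# (the harness waits for /var/tmp/spdk.sock before issuing RPCs)
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0   # 64 MiB, 512 B blocks
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001                                  # -a: allow any host
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
```

The RPC socket is a UNIX-domain socket on the shared filesystem, which is why rpc.py can be run from the root namespace even though nvmf_tgt lives inside the netns.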
00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:46.302 00:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:46.302 "params": { 00:33:46.302 "name": "Nvme1", 00:33:46.302 "trtype": "tcp", 00:33:46.302 "traddr": "10.0.0.2", 00:33:46.302 "adrfam": "ipv4", 00:33:46.302 "trsvcid": "4420", 00:33:46.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:46.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:46.302 "hdgst": false, 00:33:46.302 "ddgst": false 00:33:46.302 }, 00:33:46.302 "method": "bdev_nvme_attach_controller" 00:33:46.302 }' 00:33:46.302 [2024-07-25 00:13:51.856555] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:46.302 [2024-07-25 00:13:51.856619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid939882 ] 00:33:46.302 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.302 [2024-07-25 00:13:51.915971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.302 [2024-07-25 00:13:52.009714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.560 Running I/O for 1 seconds... 
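gen_nvmf_target_json, rendered above by printf/jq, is what bdevperf receives on /dev/fd/62. The params object in the trace is exact; the outer wrapper below is an assumption based on the standard SPDK subsystem-config layout that bdevperf's --json option consumes:

```shell
# The rendered attach-controller params from the trace, placed in the
# (assumed) "subsystems"/"config" wrapper that bdevperf reads via --json.
config='{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}'
# fed to bdevperf on a substituted fd, as the harness does:
#   bdevperf --json <(printf '%s' "$config") -q 128 -o 4096 -w verify -t 1
printf '%s\n' "$config"
```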
00:33:47.493 00:33:47.493 Latency(us) 00:33:47.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.493 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:47.493 Verification LBA range: start 0x0 length 0x4000 00:33:47.493 Nvme1n1 : 1.00 8855.26 34.59 0.00 0.00 14394.95 1231.83 16408.27 00:33:47.493 =================================================================================================================== 00:33:47.493 Total : 8855.26 34.59 0.00 0.00 14394.95 1231.83 16408.27 00:33:47.752 00:13:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=940026 00:33:47.752 00:13:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:47.752 00:13:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:47.752 00:13:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:47.752 00:13:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:47.752 00:13:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:47.752 00:13:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:47.752 00:13:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:47.752 { 00:33:47.752 "params": { 00:33:47.752 "name": "Nvme$subsystem", 00:33:47.752 "trtype": "$TEST_TRANSPORT", 00:33:47.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:47.752 "adrfam": "ipv4", 00:33:47.752 "trsvcid": "$NVMF_PORT", 00:33:47.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:47.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:47.752 "hdgst": ${hdgst:-false}, 00:33:47.752 "ddgst": ${ddgst:-false} 00:33:47.752 }, 00:33:47.752 "method": "bdev_nvme_attach_controller" 00:33:47.752 } 00:33:47.752 EOF 00:33:47.752 )") 00:33:47.752 00:13:53 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@554 -- # cat 00:33:47.752 00:13:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:47.752 00:13:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:47.752 00:13:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:47.752 "params": { 00:33:47.752 "name": "Nvme1", 00:33:47.752 "trtype": "tcp", 00:33:47.752 "traddr": "10.0.0.2", 00:33:47.752 "adrfam": "ipv4", 00:33:47.752 "trsvcid": "4420", 00:33:47.752 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:47.752 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:47.752 "hdgst": false, 00:33:47.752 "ddgst": false 00:33:47.752 }, 00:33:47.752 "method": "bdev_nvme_attach_controller" 00:33:47.752 }' 00:33:47.752 [2024-07-25 00:13:53.464588] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:47.752 [2024-07-25 00:13:53.464665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid940026 ] 00:33:48.010 EAL: No free 2048 kB hugepages reported on node 1 00:33:48.010 [2024-07-25 00:13:53.523214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.010 [2024-07-25 00:13:53.611043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.267 Running I/O for 15 seconds... 
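The second bdevperf run above differs from the first in two switches: -t 15 (run for 15 s instead of 1 s) and -f (keep running when a controller fails). That window lets bdevperf.sh@33 SIGKILL the target mid-run, which is what produces the flood of ABORTED - SQ DELETION completions that follows: every in-flight command on the dead controller's submission queue is failed back to the initiator. The shape of the exercise, as a sketch (pids, paths, and the config file are placeholders):

```shell
# Failover exercise sketched from bdevperf.sh@30-35: run bdevperf with
# -f so it survives controller loss, then hard-kill the target mid-run.
SPDK=/path/to/spdk           # placeholder for the SPDK checkout
cfg_json=/tmp/bdevperf.json  # placeholder: gen_nvmf_target_json output
"$SPDK/build/examples/bdevperf" --json "$cfg_json" \
    -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!
sleep 3                      # let I/O reach steady state
kill -9 "$nvmfpid"           # $nvmfpid: the nvmf_tgt started earlier
sleep 3                      # queued I/O drains as ABORTED - SQ DELETION
# ...the harness then restarts nvmf_tgt so bdevperf can reconnect...
wait "$bdevperfpid"
```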
00:33:50.798 00:13:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 939739 00:33:50.798 00:13:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:50.798 [2024-07-25 00:13:56.433579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.798 [2024-07-25 00:13:56.433629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.798 [2024-07-25 00:13:56.433664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.433683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.433703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.433721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.433747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.433766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.433785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.433802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.433821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.433837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.433855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:26896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.433871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.433890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.433908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.433928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.433945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.433965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.433983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:50.799 [2024-07-25 00:13:56.434238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:27008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:27016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:27024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:27056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:27112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:50.799 [2024-07-25 00:13:56.434798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:27136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:27144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.799 [2024-07-25 00:13:56.434945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.799 [2024-07-25 00:13:56.434960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.800 [2024-07-25 00:13:56.434977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.800 [2024-07-25 00:13:56.434992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... repeated command/completion pairs elided: READ commands on sqid:1 (lba 27168 through 27272, len:8, SGL TRANSPORT DATA BLOCK) and WRITE commands on sqid:1 (lba 27288 through 27864, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each reported ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, timestamps 00:13:56.435009 through 00:13:56.437929 ...] 00:33:50.802 [2024-07-25 00:13:56.437945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3150 is same with the state(5) to be set 00:33:50.802 [2024-07-25 00:13:56.437964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:50.802 [2024-07-25 00:13:56.437977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:50.802 [2024-07-25 00:13:56.437989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27280 len:8 PRP1 0x0 PRP2 0x0 00:33:50.802 [2024-07-25 00:13:56.438004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.802 [2024-07-25 00:13:56.438074] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20e3150 was disconnected and freed. reset controller. 
00:33:50.802 [2024-07-25 00:13:56.441892] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.802 [2024-07-25 00:13:56.441968] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:50.802 [2024-07-25 00:13:56.442692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-25 00:13:56.442722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:50.802 [2024-07-25 00:13:56.442739] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:50.802 [2024-07-25 00:13:56.442993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:50.802 [2024-07-25 00:13:56.443256] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.802 [2024-07-25 00:13:56.443281] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.802 [2024-07-25 00:13:56.443299] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.802 [2024-07-25 00:13:56.446906] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... four further reconnect cycles elided (resetting controller at 00:13:56.456229, .470151, .484085, .498014): each attempt against tqpair=0x20e8e70 addr=10.0.0.2 port=4420 fails with posix.c:1037:posix_sock_create connect() errno = 111, Bad file descriptor on flush, nvme_ctrlr_process_init "Ctrlr is in error state", controller reinitialization failed for nqn.2016-06.io.spdk:cnode1, ending in bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. ...] 
00:33:50.802 [2024-07-25 00:13:56.512090] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.802 [2024-07-25 00:13:56.512531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-07-25 00:13:56.512585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.062 [2024-07-25 00:13:56.512616] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.062 [2024-07-25 00:13:56.512952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.062 [2024-07-25 00:13:56.513215] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.062 [2024-07-25 00:13:56.513241] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.062 [2024-07-25 00:13:56.513258] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.062 [2024-07-25 00:13:56.516840] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.062 [2024-07-25 00:13:56.526002] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.062 [2024-07-25 00:13:56.526434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-07-25 00:13:56.526469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.062 [2024-07-25 00:13:56.526488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.062 [2024-07-25 00:13:56.526734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.062 [2024-07-25 00:13:56.526980] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.062 [2024-07-25 00:13:56.527005] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.062 [2024-07-25 00:13:56.527022] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.062 [2024-07-25 00:13:56.530613] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.062 [2024-07-25 00:13:56.539931] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.062 [2024-07-25 00:13:56.540359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-07-25 00:13:56.540393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.062 [2024-07-25 00:13:56.540412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.062 [2024-07-25 00:13:56.540652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.062 [2024-07-25 00:13:56.540897] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.062 [2024-07-25 00:13:56.540922] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.062 [2024-07-25 00:13:56.540938] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.062 [2024-07-25 00:13:56.544527] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.062 [2024-07-25 00:13:56.553835] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.062 [2024-07-25 00:13:56.554266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-07-25 00:13:56.554295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.062 [2024-07-25 00:13:56.554311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.062 [2024-07-25 00:13:56.554555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.062 [2024-07-25 00:13:56.554799] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.062 [2024-07-25 00:13:56.554824] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.062 [2024-07-25 00:13:56.554841] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.062 [2024-07-25 00:13:56.558430] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.062 [2024-07-25 00:13:56.567724] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.062 [2024-07-25 00:13:56.568171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-07-25 00:13:56.568203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.062 [2024-07-25 00:13:56.568221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.062 [2024-07-25 00:13:56.568460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.062 [2024-07-25 00:13:56.568705] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.062 [2024-07-25 00:13:56.568730] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.062 [2024-07-25 00:13:56.568752] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.062 [2024-07-25 00:13:56.572344] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.062 [2024-07-25 00:13:56.581660] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.062 [2024-07-25 00:13:56.582097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-07-25 00:13:56.582136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.062 [2024-07-25 00:13:56.582155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.062 [2024-07-25 00:13:56.582394] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.062 [2024-07-25 00:13:56.582638] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.062 [2024-07-25 00:13:56.582663] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.062 [2024-07-25 00:13:56.582679] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.062 [2024-07-25 00:13:56.586268] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.062 [2024-07-25 00:13:56.595568] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.062 [2024-07-25 00:13:56.595984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-07-25 00:13:56.596016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.062 [2024-07-25 00:13:56.596034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.062 [2024-07-25 00:13:56.596284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.062 [2024-07-25 00:13:56.596530] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.062 [2024-07-25 00:13:56.596555] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.062 [2024-07-25 00:13:56.596571] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.062 [2024-07-25 00:13:56.600156] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.062 [2024-07-25 00:13:56.609457] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.062 [2024-07-25 00:13:56.609905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-07-25 00:13:56.609937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.062 [2024-07-25 00:13:56.609956] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.063 [2024-07-25 00:13:56.610206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.063 [2024-07-25 00:13:56.610451] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.063 [2024-07-25 00:13:56.610475] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.063 [2024-07-25 00:13:56.610492] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.063 [2024-07-25 00:13:56.614069] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.063 [2024-07-25 00:13:56.623380] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.063 [2024-07-25 00:13:56.623798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.063 [2024-07-25 00:13:56.623835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.063 [2024-07-25 00:13:56.623855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.063 [2024-07-25 00:13:56.624095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.063 [2024-07-25 00:13:56.624351] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.063 [2024-07-25 00:13:56.624375] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.063 [2024-07-25 00:13:56.624392] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.063 [2024-07-25 00:13:56.627973] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.063 [2024-07-25 00:13:56.637287] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.063 [2024-07-25 00:13:56.637735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.063 [2024-07-25 00:13:56.637767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.063 [2024-07-25 00:13:56.637786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.063 [2024-07-25 00:13:56.638025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.063 [2024-07-25 00:13:56.638281] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.063 [2024-07-25 00:13:56.638306] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.063 [2024-07-25 00:13:56.638323] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.063 [2024-07-25 00:13:56.641903] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.063 [2024-07-25 00:13:56.651213] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.063 [2024-07-25 00:13:56.651662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.063 [2024-07-25 00:13:56.651694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.063 [2024-07-25 00:13:56.651712] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.063 [2024-07-25 00:13:56.651951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.063 [2024-07-25 00:13:56.652207] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.063 [2024-07-25 00:13:56.652232] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.063 [2024-07-25 00:13:56.652248] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.063 [2024-07-25 00:13:56.655828] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.063 [2024-07-25 00:13:56.665140] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.063 [2024-07-25 00:13:56.665575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.063 [2024-07-25 00:13:56.665607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.063 [2024-07-25 00:13:56.665625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.063 [2024-07-25 00:13:56.665863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.063 [2024-07-25 00:13:56.666123] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.063 [2024-07-25 00:13:56.666148] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.063 [2024-07-25 00:13:56.666164] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.063 [2024-07-25 00:13:56.669741] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.063 [2024-07-25 00:13:56.679047] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.063 [2024-07-25 00:13:56.679485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.063 [2024-07-25 00:13:56.679512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.063 [2024-07-25 00:13:56.679529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.063 [2024-07-25 00:13:56.679764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.063 [2024-07-25 00:13:56.680009] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.063 [2024-07-25 00:13:56.680034] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.063 [2024-07-25 00:13:56.680050] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.063 [2024-07-25 00:13:56.683634] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.063 [2024-07-25 00:13:56.693007] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.063 [2024-07-25 00:13:56.693469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.063 [2024-07-25 00:13:56.693502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.063 [2024-07-25 00:13:56.693520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.063 [2024-07-25 00:13:56.693759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.063 [2024-07-25 00:13:56.694003] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.063 [2024-07-25 00:13:56.694028] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.063 [2024-07-25 00:13:56.694045] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.063 [2024-07-25 00:13:56.697636] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.063 [2024-07-25 00:13:56.706960] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.063 [2024-07-25 00:13:56.707410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.063 [2024-07-25 00:13:56.707442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.063 [2024-07-25 00:13:56.707460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.063 [2024-07-25 00:13:56.707700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.063 [2024-07-25 00:13:56.707944] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.063 [2024-07-25 00:13:56.707969] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.063 [2024-07-25 00:13:56.707985] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.063 [2024-07-25 00:13:56.711576] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.063 [2024-07-25 00:13:56.720898] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.063 [2024-07-25 00:13:56.721342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.063 [2024-07-25 00:13:56.721370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.063 [2024-07-25 00:13:56.721389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.063 [2024-07-25 00:13:56.721651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.063 [2024-07-25 00:13:56.721895] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.063 [2024-07-25 00:13:56.721920] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.063 [2024-07-25 00:13:56.721936] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.063 [2024-07-25 00:13:56.725526] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.063 [2024-07-25 00:13:56.734826] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.063 [2024-07-25 00:13:56.735288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.063 [2024-07-25 00:13:56.735317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.063 [2024-07-25 00:13:56.735346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.063 [2024-07-25 00:13:56.735599] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.063 [2024-07-25 00:13:56.735843] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.063 [2024-07-25 00:13:56.735868] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.063 [2024-07-25 00:13:56.735885] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.063 [2024-07-25 00:13:56.739474] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.063 [2024-07-25 00:13:56.748792] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.063 [2024-07-25 00:13:56.749207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.064 [2024-07-25 00:13:56.749239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.064 [2024-07-25 00:13:56.749268] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.064 [2024-07-25 00:13:56.749507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.064 [2024-07-25 00:13:56.749751] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.064 [2024-07-25 00:13:56.749776] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.064 [2024-07-25 00:13:56.749793] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.064 [2024-07-25 00:13:56.753393] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.064 [2024-07-25 00:13:56.762708] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.064 [2024-07-25 00:13:56.763122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.064 [2024-07-25 00:13:56.763154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.064 [2024-07-25 00:13:56.763177] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.064 [2024-07-25 00:13:56.763416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.064 [2024-07-25 00:13:56.763660] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.064 [2024-07-25 00:13:56.763685] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.064 [2024-07-25 00:13:56.763702] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.064 [2024-07-25 00:13:56.767289] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.064 [2024-07-25 00:13:56.776726] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.064 [2024-07-25 00:13:56.777158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.064 [2024-07-25 00:13:56.777205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.323 [2024-07-25 00:13:56.777239] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.323 [2024-07-25 00:13:56.777548] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.323 [2024-07-25 00:13:56.777797] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.323 [2024-07-25 00:13:56.777823] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.323 [2024-07-25 00:13:56.777840] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.323 [2024-07-25 00:13:56.781433] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.323 [2024-07-25 00:13:56.790592] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.323 [2024-07-25 00:13:56.791051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.323 [2024-07-25 00:13:56.791085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.323 [2024-07-25 00:13:56.791114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.323 [2024-07-25 00:13:56.791358] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.323 [2024-07-25 00:13:56.791603] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.323 [2024-07-25 00:13:56.791629] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.323 [2024-07-25 00:13:56.791645] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.323 [2024-07-25 00:13:56.795251] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.323 [2024-07-25 00:13:56.804559] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.323 [2024-07-25 00:13:56.805000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.323 [2024-07-25 00:13:56.805032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.323 [2024-07-25 00:13:56.805050] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.323 [2024-07-25 00:13:56.805300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.323 [2024-07-25 00:13:56.805545] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.323 [2024-07-25 00:13:56.805576] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.323 [2024-07-25 00:13:56.805594] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.323 [2024-07-25 00:13:56.809176] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.323 [2024-07-25 00:13:56.818480] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.323 [2024-07-25 00:13:56.818922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.323 [2024-07-25 00:13:56.818954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.323 [2024-07-25 00:13:56.818972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.323 [2024-07-25 00:13:56.819222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.323 [2024-07-25 00:13:56.819477] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.323 [2024-07-25 00:13:56.819502] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.323 [2024-07-25 00:13:56.819518] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.323 [2024-07-25 00:13:56.823113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.323 [2024-07-25 00:13:56.832424] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.323 [2024-07-25 00:13:56.832842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.323 [2024-07-25 00:13:56.832874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.323 [2024-07-25 00:13:56.832898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.323 [2024-07-25 00:13:56.833149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.323 [2024-07-25 00:13:56.833393] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.323 [2024-07-25 00:13:56.833418] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.323 [2024-07-25 00:13:56.833434] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.323 [2024-07-25 00:13:56.837011] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.323 [2024-07-25 00:13:56.846319] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.323 [2024-07-25 00:13:56.846836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.323 [2024-07-25 00:13:56.846864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.323 [2024-07-25 00:13:56.846880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.323 [2024-07-25 00:13:56.847149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.323 [2024-07-25 00:13:56.847394] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.323 [2024-07-25 00:13:56.847419] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.323 [2024-07-25 00:13:56.847436] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.323 [2024-07-25 00:13:56.851016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.323 [2024-07-25 00:13:56.860327] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.323 [2024-07-25 00:13:56.860767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.323 [2024-07-25 00:13:56.860799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.323 [2024-07-25 00:13:56.860817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.323 [2024-07-25 00:13:56.861056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.323 [2024-07-25 00:13:56.861310] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.323 [2024-07-25 00:13:56.861335] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.323 [2024-07-25 00:13:56.861352] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.323 [2024-07-25 00:13:56.864930] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.323 [2024-07-25 00:13:56.874242] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.323 [2024-07-25 00:13:56.874685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.323 [2024-07-25 00:13:56.874716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.323 [2024-07-25 00:13:56.874733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.323 [2024-07-25 00:13:56.874972] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.323 [2024-07-25 00:13:56.875227] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.323 [2024-07-25 00:13:56.875252] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.323 [2024-07-25 00:13:56.875269] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.323 [2024-07-25 00:13:56.878847] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.323 [2024-07-25 00:13:56.888175] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.323 [2024-07-25 00:13:56.888606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.323 [2024-07-25 00:13:56.888638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.323 [2024-07-25 00:13:56.888667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.323 [2024-07-25 00:13:56.888907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.323 [2024-07-25 00:13:56.889175] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.323 [2024-07-25 00:13:56.889199] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.323 [2024-07-25 00:13:56.889213] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.323 [2024-07-25 00:13:56.892835] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.323 [2024-07-25 00:13:56.902224] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.323 [2024-07-25 00:13:56.902721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.324 [2024-07-25 00:13:56.902749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.324 [2024-07-25 00:13:56.902764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.324 [2024-07-25 00:13:56.903018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.324 [2024-07-25 00:13:56.903272] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.324 [2024-07-25 00:13:56.903295] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.324 [2024-07-25 00:13:56.903309] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.324 [2024-07-25 00:13:56.906934] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.324 [2024-07-25 00:13:56.916242] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.324 [2024-07-25 00:13:56.916676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.324 [2024-07-25 00:13:56.916708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.324 [2024-07-25 00:13:56.916726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.324 [2024-07-25 00:13:56.916965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.324 [2024-07-25 00:13:56.917227] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.324 [2024-07-25 00:13:56.917250] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.324 [2024-07-25 00:13:56.917264] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.324 [2024-07-25 00:13:56.920843] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.324 [2024-07-25 00:13:56.930156] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.324 [2024-07-25 00:13:56.930579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.324 [2024-07-25 00:13:56.930605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.324 [2024-07-25 00:13:56.930623] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.324 [2024-07-25 00:13:56.930857] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.324 [2024-07-25 00:13:56.931111] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.324 [2024-07-25 00:13:56.931136] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.324 [2024-07-25 00:13:56.931153] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.324 [2024-07-25 00:13:56.934730] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.324 [2024-07-25 00:13:56.944029] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.324 [2024-07-25 00:13:56.944452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.324 [2024-07-25 00:13:56.944493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.324 [2024-07-25 00:13:56.944511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.324 [2024-07-25 00:13:56.944754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.324 [2024-07-25 00:13:56.944998] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.324 [2024-07-25 00:13:56.945023] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.324 [2024-07-25 00:13:56.945047] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.324 [2024-07-25 00:13:56.948635] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.324 [2024-07-25 00:13:56.957949] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.324 [2024-07-25 00:13:56.958394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.324 [2024-07-25 00:13:56.958425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.324 [2024-07-25 00:13:56.958443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.324 [2024-07-25 00:13:56.958682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.324 [2024-07-25 00:13:56.958926] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.324 [2024-07-25 00:13:56.958951] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.324 [2024-07-25 00:13:56.958967] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.324 [2024-07-25 00:13:56.962559] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.324 [2024-07-25 00:13:56.971863] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.324 [2024-07-25 00:13:56.972303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.324 [2024-07-25 00:13:56.972335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.324 [2024-07-25 00:13:56.972353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.324 [2024-07-25 00:13:56.972592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.324 [2024-07-25 00:13:56.972837] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.324 [2024-07-25 00:13:56.972862] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.324 [2024-07-25 00:13:56.972878] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.324 [2024-07-25 00:13:56.976468] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.324 [2024-07-25 00:13:56.985761] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.324 [2024-07-25 00:13:56.986205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.324 [2024-07-25 00:13:56.986236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.324 [2024-07-25 00:13:56.986254] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.324 [2024-07-25 00:13:56.986493] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.324 [2024-07-25 00:13:56.986738] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.324 [2024-07-25 00:13:56.986763] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.324 [2024-07-25 00:13:56.986778] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.324 [2024-07-25 00:13:56.990366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.324 [2024-07-25 00:13:56.999665] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.324 [2024-07-25 00:13:57.000113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.324 [2024-07-25 00:13:57.000144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.324 [2024-07-25 00:13:57.000162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.324 [2024-07-25 00:13:57.000401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.324 [2024-07-25 00:13:57.000645] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.324 [2024-07-25 00:13:57.000670] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.324 [2024-07-25 00:13:57.000686] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.324 [2024-07-25 00:13:57.004273] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.324 [2024-07-25 00:13:57.013574] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.324 [2024-07-25 00:13:57.014013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.324 [2024-07-25 00:13:57.014040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.324 [2024-07-25 00:13:57.014056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.324 [2024-07-25 00:13:57.014337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.324 [2024-07-25 00:13:57.014583] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.324 [2024-07-25 00:13:57.014608] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.324 [2024-07-25 00:13:57.014624] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.324 [2024-07-25 00:13:57.018209] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.324 [2024-07-25 00:13:57.027503] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.324 [2024-07-25 00:13:57.027940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.324 [2024-07-25 00:13:57.027971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.324 [2024-07-25 00:13:57.027989] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.324 [2024-07-25 00:13:57.028240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.324 [2024-07-25 00:13:57.028485] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.325 [2024-07-25 00:13:57.028511] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.325 [2024-07-25 00:13:57.028527] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.325 [2024-07-25 00:13:57.032114] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.584 [2024-07-25 00:13:57.041641] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.584 [2024-07-25 00:13:57.042113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.584 [2024-07-25 00:13:57.042148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.584 [2024-07-25 00:13:57.042167] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.584 [2024-07-25 00:13:57.042407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.584 [2024-07-25 00:13:57.042658] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.584 [2024-07-25 00:13:57.042684] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.584 [2024-07-25 00:13:57.042700] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.584 [2024-07-25 00:13:57.046351] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.584 [2024-07-25 00:13:57.055653] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.584 [2024-07-25 00:13:57.056112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.584 [2024-07-25 00:13:57.056147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.584 [2024-07-25 00:13:57.056165] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.584 [2024-07-25 00:13:57.056407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.584 [2024-07-25 00:13:57.056652] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.584 [2024-07-25 00:13:57.056676] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.584 [2024-07-25 00:13:57.056693] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.584 [2024-07-25 00:13:57.060288] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.584 [2024-07-25 00:13:57.069586] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.584 [2024-07-25 00:13:57.070171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.584 [2024-07-25 00:13:57.070204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.584 [2024-07-25 00:13:57.070223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.584 [2024-07-25 00:13:57.070463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.584 [2024-07-25 00:13:57.070708] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.584 [2024-07-25 00:13:57.070732] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.584 [2024-07-25 00:13:57.070749] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.584 [2024-07-25 00:13:57.074327] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.584 [2024-07-25 00:13:57.083627] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.584 [2024-07-25 00:13:57.084070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.584 [2024-07-25 00:13:57.084098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.584 [2024-07-25 00:13:57.084149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.584 [2024-07-25 00:13:57.084412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.584 [2024-07-25 00:13:57.084656] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.584 [2024-07-25 00:13:57.084681] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.584 [2024-07-25 00:13:57.084697] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.584 [2024-07-25 00:13:57.088288] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.584 [2024-07-25 00:13:57.097583] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.584 [2024-07-25 00:13:57.098031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.584 [2024-07-25 00:13:57.098063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.584 [2024-07-25 00:13:57.098081] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.584 [2024-07-25 00:13:57.098330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.584 [2024-07-25 00:13:57.098575] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.584 [2024-07-25 00:13:57.098600] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.584 [2024-07-25 00:13:57.098616] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.584 [2024-07-25 00:13:57.102199] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.584 [2024-07-25 00:13:57.111517] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.584 [2024-07-25 00:13:57.111973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.584 [2024-07-25 00:13:57.112001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.584 [2024-07-25 00:13:57.112018] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.584 [2024-07-25 00:13:57.112289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.584 [2024-07-25 00:13:57.112534] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.584 [2024-07-25 00:13:57.112560] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.584 [2024-07-25 00:13:57.112577] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.584 [2024-07-25 00:13:57.116174] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.584 [2024-07-25 00:13:57.125478] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.584 [2024-07-25 00:13:57.126008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.584 [2024-07-25 00:13:57.126063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.584 [2024-07-25 00:13:57.126081] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.584 [2024-07-25 00:13:57.126329] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.584 [2024-07-25 00:13:57.126575] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.584 [2024-07-25 00:13:57.126601] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.584 [2024-07-25 00:13:57.126617] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.584 [2024-07-25 00:13:57.130200] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.584 [2024-07-25 00:13:57.139508] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.584 [2024-07-25 00:13:57.139956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.584 [2024-07-25 00:13:57.139994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.584 [2024-07-25 00:13:57.140013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.584 [2024-07-25 00:13:57.140267] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.584 [2024-07-25 00:13:57.140514] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.584 [2024-07-25 00:13:57.140540] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.584 [2024-07-25 00:13:57.140557] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.584 [2024-07-25 00:13:57.144143] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.584 [2024-07-25 00:13:57.153443] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.584 [2024-07-25 00:13:57.153910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.584 [2024-07-25 00:13:57.153938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.584 [2024-07-25 00:13:57.153955] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.584 [2024-07-25 00:13:57.154220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.584 [2024-07-25 00:13:57.154465] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.584 [2024-07-25 00:13:57.154491] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.584 [2024-07-25 00:13:57.154507] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.584 [2024-07-25 00:13:57.158089] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.584 [2024-07-25 00:13:57.167399] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.584 [2024-07-25 00:13:57.167825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.585 [2024-07-25 00:13:57.167857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.585 [2024-07-25 00:13:57.167875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.585 [2024-07-25 00:13:57.168128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.585 [2024-07-25 00:13:57.168374] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.585 [2024-07-25 00:13:57.168401] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.585 [2024-07-25 00:13:57.168418] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.585 [2024-07-25 00:13:57.172000] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.585 [2024-07-25 00:13:57.181314] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.585 [2024-07-25 00:13:57.181762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.585 [2024-07-25 00:13:57.181794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.585 [2024-07-25 00:13:57.181813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.585 [2024-07-25 00:13:57.182053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.585 [2024-07-25 00:13:57.182316] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.585 [2024-07-25 00:13:57.182343] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.585 [2024-07-25 00:13:57.182360] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.585 [2024-07-25 00:13:57.185943] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.585 [2024-07-25 00:13:57.195257] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.585 [2024-07-25 00:13:57.195694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.585 [2024-07-25 00:13:57.195736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.585 [2024-07-25 00:13:57.195754] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.585 [2024-07-25 00:13:57.195992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.585 [2024-07-25 00:13:57.196251] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.585 [2024-07-25 00:13:57.196278] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.585 [2024-07-25 00:13:57.196295] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.585 [2024-07-25 00:13:57.199880] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.585 [2024-07-25 00:13:57.209222] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.585 [2024-07-25 00:13:57.209672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.585 [2024-07-25 00:13:57.209700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.585 [2024-07-25 00:13:57.209716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.585 [2024-07-25 00:13:57.209962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.585 [2024-07-25 00:13:57.210219] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.585 [2024-07-25 00:13:57.210245] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.585 [2024-07-25 00:13:57.210261] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.585 [2024-07-25 00:13:57.213840] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.585 [2024-07-25 00:13:57.223174] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.585 [2024-07-25 00:13:57.223615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.585 [2024-07-25 00:13:57.223643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.585 [2024-07-25 00:13:57.223660] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.585 [2024-07-25 00:13:57.223910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.585 [2024-07-25 00:13:57.224170] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.585 [2024-07-25 00:13:57.224197] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.585 [2024-07-25 00:13:57.224213] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.585 [2024-07-25 00:13:57.227798] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.585 [2024-07-25 00:13:57.237135] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.585 [2024-07-25 00:13:57.237575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.585 [2024-07-25 00:13:57.237607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.585 [2024-07-25 00:13:57.237625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.585 [2024-07-25 00:13:57.237864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.585 [2024-07-25 00:13:57.238120] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.585 [2024-07-25 00:13:57.238145] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.585 [2024-07-25 00:13:57.238162] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.585 [2024-07-25 00:13:57.241748] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.585 [2024-07-25 00:13:57.251071] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.585 [2024-07-25 00:13:57.251515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.585 [2024-07-25 00:13:57.251542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.585 [2024-07-25 00:13:57.251558] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.585 [2024-07-25 00:13:57.251799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.585 [2024-07-25 00:13:57.252044] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.585 [2024-07-25 00:13:57.252070] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.585 [2024-07-25 00:13:57.252086] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.585 [2024-07-25 00:13:57.255679] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.585 [2024-07-25 00:13:57.265038] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.585 [2024-07-25 00:13:57.265551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.585 [2024-07-25 00:13:57.265579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.585 [2024-07-25 00:13:57.265595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.585 [2024-07-25 00:13:57.265852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.585 [2024-07-25 00:13:57.266098] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.585 [2024-07-25 00:13:57.266135] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.585 [2024-07-25 00:13:57.266151] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.585 [2024-07-25 00:13:57.269738] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.585 [2024-07-25 00:13:57.279063] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.585 [2024-07-25 00:13:57.279523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.585 [2024-07-25 00:13:57.279556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.585 [2024-07-25 00:13:57.279580] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.585 [2024-07-25 00:13:57.279820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.585 [2024-07-25 00:13:57.280066] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.585 [2024-07-25 00:13:57.280091] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.585 [2024-07-25 00:13:57.280120] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.585 [2024-07-25 00:13:57.283711] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.585 [2024-07-25 00:13:57.293029] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.585 [2024-07-25 00:13:57.293485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.585 [2024-07-25 00:13:57.293517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.585 [2024-07-25 00:13:57.293535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.585 [2024-07-25 00:13:57.293774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.585 [2024-07-25 00:13:57.294019] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.585 [2024-07-25 00:13:57.294044] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.585 [2024-07-25 00:13:57.294060] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.585 [2024-07-25 00:13:57.297766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.845 [2024-07-25 00:13:57.307052] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.845 [2024-07-25 00:13:57.307466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.845 [2024-07-25 00:13:57.307502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.845 [2024-07-25 00:13:57.307522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.845 [2024-07-25 00:13:57.307762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.845 [2024-07-25 00:13:57.308007] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.845 [2024-07-25 00:13:57.308034] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.845 [2024-07-25 00:13:57.308050] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.845 [2024-07-25 00:13:57.311651] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.845 [2024-07-25 00:13:57.320970] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.845 [2024-07-25 00:13:57.321431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.845 [2024-07-25 00:13:57.321464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.845 [2024-07-25 00:13:57.321483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.845 [2024-07-25 00:13:57.321722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.845 [2024-07-25 00:13:57.321966] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.845 [2024-07-25 00:13:57.321997] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.845 [2024-07-25 00:13:57.322015] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.845 [2024-07-25 00:13:57.325613] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.845 [2024-07-25 00:13:57.334924] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.845 [2024-07-25 00:13:57.335359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.845 [2024-07-25 00:13:57.335388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.845 [2024-07-25 00:13:57.335419] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.845 [2024-07-25 00:13:57.335667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.845 [2024-07-25 00:13:57.335912] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.845 [2024-07-25 00:13:57.335937] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.845 [2024-07-25 00:13:57.335954] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.845 [2024-07-25 00:13:57.339552] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.845 [2024-07-25 00:13:57.348872] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.845 [2024-07-25 00:13:57.349298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.845 [2024-07-25 00:13:57.349331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.845 [2024-07-25 00:13:57.349350] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.845 [2024-07-25 00:13:57.349588] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.845 [2024-07-25 00:13:57.349834] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.845 [2024-07-25 00:13:57.349860] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.845 [2024-07-25 00:13:57.349876] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.845 [2024-07-25 00:13:57.353470] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.845 [2024-07-25 00:13:57.362791] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.845 [2024-07-25 00:13:57.363252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.845 [2024-07-25 00:13:57.363285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.845 [2024-07-25 00:13:57.363304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.845 [2024-07-25 00:13:57.363543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.845 [2024-07-25 00:13:57.363789] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.845 [2024-07-25 00:13:57.363814] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.845 [2024-07-25 00:13:57.363831] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.846 [2024-07-25 00:13:57.367422] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.846 [2024-07-25 00:13:57.376742] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.846 [2024-07-25 00:13:57.377168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.846 [2024-07-25 00:13:57.377196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.846 [2024-07-25 00:13:57.377212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.846 [2024-07-25 00:13:57.377449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.846 [2024-07-25 00:13:57.377694] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.846 [2024-07-25 00:13:57.377720] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.846 [2024-07-25 00:13:57.377737] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.846 [2024-07-25 00:13:57.381329] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.846 [2024-07-25 00:13:57.390657] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.846 [2024-07-25 00:13:57.391099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.846 [2024-07-25 00:13:57.391148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.846 [2024-07-25 00:13:57.391167] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.846 [2024-07-25 00:13:57.391406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.846 [2024-07-25 00:13:57.391653] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.846 [2024-07-25 00:13:57.391679] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.846 [2024-07-25 00:13:57.391696] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.846 [2024-07-25 00:13:57.395291] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.846 [2024-07-25 00:13:57.404602] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.846 [2024-07-25 00:13:57.405044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.846 [2024-07-25 00:13:57.405073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.846 [2024-07-25 00:13:57.405090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.846 [2024-07-25 00:13:57.405360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.846 [2024-07-25 00:13:57.405605] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.846 [2024-07-25 00:13:57.405631] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.846 [2024-07-25 00:13:57.405648] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.846 [2024-07-25 00:13:57.409236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.846 [2024-07-25 00:13:57.418541] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.846 [2024-07-25 00:13:57.418978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.846 [2024-07-25 00:13:57.419010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.846 [2024-07-25 00:13:57.419028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.846 [2024-07-25 00:13:57.419287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.846 [2024-07-25 00:13:57.419530] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.846 [2024-07-25 00:13:57.419557] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.846 [2024-07-25 00:13:57.419574] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.846 [2024-07-25 00:13:57.423162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.846 [2024-07-25 00:13:57.432468] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.846 [2024-07-25 00:13:57.432881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.846 [2024-07-25 00:13:57.432913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.846 [2024-07-25 00:13:57.432932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.846 [2024-07-25 00:13:57.433184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.846 [2024-07-25 00:13:57.433429] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.846 [2024-07-25 00:13:57.433455] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.846 [2024-07-25 00:13:57.433471] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.846 [2024-07-25 00:13:57.437052] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.846 [2024-07-25 00:13:57.446365] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.846 [2024-07-25 00:13:57.446779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.846 [2024-07-25 00:13:57.446812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.846 [2024-07-25 00:13:57.446830] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.846 [2024-07-25 00:13:57.447070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.846 [2024-07-25 00:13:57.447325] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.846 [2024-07-25 00:13:57.447352] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.846 [2024-07-25 00:13:57.447369] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.846 [2024-07-25 00:13:57.450949] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.846 [2024-07-25 00:13:57.460380] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.846 [2024-07-25 00:13:57.460806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.846 [2024-07-25 00:13:57.460839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.846 [2024-07-25 00:13:57.460857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.846 [2024-07-25 00:13:57.461097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.846 [2024-07-25 00:13:57.461352] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.846 [2024-07-25 00:13:57.461378] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.846 [2024-07-25 00:13:57.461400] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.846 [2024-07-25 00:13:57.464979] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.846 [2024-07-25 00:13:57.474334] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.846 [2024-07-25 00:13:57.474757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.846 [2024-07-25 00:13:57.474790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.846 [2024-07-25 00:13:57.474809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.846 [2024-07-25 00:13:57.475049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.846 [2024-07-25 00:13:57.475304] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.846 [2024-07-25 00:13:57.475332] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.846 [2024-07-25 00:13:57.475349] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.846 [2024-07-25 00:13:57.478926] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.846 [2024-07-25 00:13:57.488236] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.846 [2024-07-25 00:13:57.488676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.846 [2024-07-25 00:13:57.488708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:51.846 [2024-07-25 00:13:57.488726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:51.846 [2024-07-25 00:13:57.488966] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:51.846 [2024-07-25 00:13:57.489225] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.846 [2024-07-25 00:13:57.489251] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.846 [2024-07-25 00:13:57.489268] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.846 [2024-07-25 00:13:57.492848] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.846 [2024-07-25 00:13:57.502161] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.846 [2024-07-25 00:13:57.502601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.846 [2024-07-25 00:13:57.502630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.846 [2024-07-25 00:13:57.502647] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.847 [2024-07-25 00:13:57.502896] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.847 [2024-07-25 00:13:57.503170] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.847 [2024-07-25 00:13:57.503193] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.847 [2024-07-25 00:13:57.503208] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.847 [2024-07-25 00:13:57.506769] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.847 [2024-07-25 00:13:57.516070] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.847 [2024-07-25 00:13:57.516520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.847 [2024-07-25 00:13:57.516553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.847 [2024-07-25 00:13:57.516570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.847 [2024-07-25 00:13:57.516817] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.847 [2024-07-25 00:13:57.517061] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.847 [2024-07-25 00:13:57.517087] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.847 [2024-07-25 00:13:57.517115] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.847 [2024-07-25 00:13:57.520697] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.847 [2024-07-25 00:13:57.529994] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.847 [2024-07-25 00:13:57.530455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.847 [2024-07-25 00:13:57.530484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.847 [2024-07-25 00:13:57.530500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.847 [2024-07-25 00:13:57.530750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.847 [2024-07-25 00:13:57.530994] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.847 [2024-07-25 00:13:57.531020] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.847 [2024-07-25 00:13:57.531037] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.847 [2024-07-25 00:13:57.534627] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.847 [2024-07-25 00:13:57.543927] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.847 [2024-07-25 00:13:57.544388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.847 [2024-07-25 00:13:57.544416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.847 [2024-07-25 00:13:57.544432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.847 [2024-07-25 00:13:57.544682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.847 [2024-07-25 00:13:57.544925] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.847 [2024-07-25 00:13:57.544951] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.847 [2024-07-25 00:13:57.544968] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.847 [2024-07-25 00:13:57.548559] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.847 [2024-07-25 00:13:57.557935] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.847 [2024-07-25 00:13:57.558393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.847 [2024-07-25 00:13:57.558428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:51.847 [2024-07-25 00:13:57.558448] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:51.847 [2024-07-25 00:13:57.558738] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:51.847 [2024-07-25 00:13:57.558995] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.847 [2024-07-25 00:13:57.559022] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.847 [2024-07-25 00:13:57.559040] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.106 [2024-07-25 00:13:57.562743] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.106 [2024-07-25 00:13:57.571918] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.106 [2024-07-25 00:13:57.572386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.106 [2024-07-25 00:13:57.572422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.106 [2024-07-25 00:13:57.572441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.106 [2024-07-25 00:13:57.572682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.106 [2024-07-25 00:13:57.572926] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.106 [2024-07-25 00:13:57.572952] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.106 [2024-07-25 00:13:57.572969] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.106 [2024-07-25 00:13:57.576569] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.106 [2024-07-25 00:13:57.585878] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.106 [2024-07-25 00:13:57.586335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.106 [2024-07-25 00:13:57.586369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.106 [2024-07-25 00:13:57.586387] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.106 [2024-07-25 00:13:57.586626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.106 [2024-07-25 00:13:57.586871] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.106 [2024-07-25 00:13:57.586896] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.106 [2024-07-25 00:13:57.586913] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.106 [2024-07-25 00:13:57.590503] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.106 [2024-07-25 00:13:57.599806] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.106 [2024-07-25 00:13:57.600222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.106 [2024-07-25 00:13:57.600254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.106 [2024-07-25 00:13:57.600273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.106 [2024-07-25 00:13:57.600513] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.106 [2024-07-25 00:13:57.600757] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.106 [2024-07-25 00:13:57.600784] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.106 [2024-07-25 00:13:57.600800] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.106 [2024-07-25 00:13:57.604399] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.106 [2024-07-25 00:13:57.613705] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.106 [2024-07-25 00:13:57.614157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.106 [2024-07-25 00:13:57.614187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.106 [2024-07-25 00:13:57.614204] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.107 [2024-07-25 00:13:57.614462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.107 [2024-07-25 00:13:57.614707] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.107 [2024-07-25 00:13:57.614733] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.107 [2024-07-25 00:13:57.614750] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.107 [2024-07-25 00:13:57.618343] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.107 [2024-07-25 00:13:57.627648] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.107 [2024-07-25 00:13:57.628086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.107 [2024-07-25 00:13:57.628128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.107 [2024-07-25 00:13:57.628149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.107 [2024-07-25 00:13:57.628389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.107 [2024-07-25 00:13:57.628634] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.107 [2024-07-25 00:13:57.628660] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.107 [2024-07-25 00:13:57.628676] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.107 [2024-07-25 00:13:57.632269] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.107 [2024-07-25 00:13:57.641575] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.107 [2024-07-25 00:13:57.642022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.107 [2024-07-25 00:13:57.642050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.107 [2024-07-25 00:13:57.642066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.107 [2024-07-25 00:13:57.642348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.107 [2024-07-25 00:13:57.642595] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.107 [2024-07-25 00:13:57.642621] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.107 [2024-07-25 00:13:57.642638] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.107 [2024-07-25 00:13:57.646226] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.107 [2024-07-25 00:13:57.655555] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.107 [2024-07-25 00:13:57.655968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.107 [2024-07-25 00:13:57.656001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.107 [2024-07-25 00:13:57.656028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.107 [2024-07-25 00:13:57.656282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.107 [2024-07-25 00:13:57.656527] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.107 [2024-07-25 00:13:57.656553] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.107 [2024-07-25 00:13:57.656570] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.107 [2024-07-25 00:13:57.660158] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.107 [2024-07-25 00:13:57.669460] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.107 [2024-07-25 00:13:57.669906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.107 [2024-07-25 00:13:57.669933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.107 [2024-07-25 00:13:57.669948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.107 [2024-07-25 00:13:57.670203] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.107 [2024-07-25 00:13:57.670448] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.107 [2024-07-25 00:13:57.670474] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.107 [2024-07-25 00:13:57.670490] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.107 [2024-07-25 00:13:57.674075] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.107 [2024-07-25 00:13:57.683401] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.107 [2024-07-25 00:13:57.683818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.107 [2024-07-25 00:13:57.683851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.107 [2024-07-25 00:13:57.683871] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.107 [2024-07-25 00:13:57.684123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.107 [2024-07-25 00:13:57.684368] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.107 [2024-07-25 00:13:57.684395] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.107 [2024-07-25 00:13:57.684411] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.107 [2024-07-25 00:13:57.687996] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.107 [2024-07-25 00:13:57.697317] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.107 [2024-07-25 00:13:57.697764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.107 [2024-07-25 00:13:57.697792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.107 [2024-07-25 00:13:57.697808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.107 [2024-07-25 00:13:57.698057] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.107 [2024-07-25 00:13:57.698313] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.107 [2024-07-25 00:13:57.698345] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.107 [2024-07-25 00:13:57.698362] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.107 [2024-07-25 00:13:57.701943] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.107 [2024-07-25 00:13:57.711260] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.107 [2024-07-25 00:13:57.711772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.107 [2024-07-25 00:13:57.711804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.107 [2024-07-25 00:13:57.711822] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.107 [2024-07-25 00:13:57.712061] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.107 [2024-07-25 00:13:57.712383] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.107 [2024-07-25 00:13:57.712411] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.107 [2024-07-25 00:13:57.712428] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.107 [2024-07-25 00:13:57.716013] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.107 [2024-07-25 00:13:57.725121] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.107 [2024-07-25 00:13:57.725560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.107 [2024-07-25 00:13:57.725592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.107 [2024-07-25 00:13:57.725611] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.107 [2024-07-25 00:13:57.725850] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.107 [2024-07-25 00:13:57.726093] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.107 [2024-07-25 00:13:57.726132] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.107 [2024-07-25 00:13:57.726149] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.107 [2024-07-25 00:13:57.729732] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.107 [2024-07-25 00:13:57.739033] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.107 [2024-07-25 00:13:57.739485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.107 [2024-07-25 00:13:57.739514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.107 [2024-07-25 00:13:57.739530] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.107 [2024-07-25 00:13:57.739776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.107 [2024-07-25 00:13:57.740020] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.107 [2024-07-25 00:13:57.740046] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.107 [2024-07-25 00:13:57.740062] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.107 [2024-07-25 00:13:57.743659] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.108 [2024-07-25 00:13:57.752974] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.108 [2024-07-25 00:13:57.753422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.108 [2024-07-25 00:13:57.753455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.108 [2024-07-25 00:13:57.753474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.108 [2024-07-25 00:13:57.753713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.108 [2024-07-25 00:13:57.753959] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.108 [2024-07-25 00:13:57.753985] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.108 [2024-07-25 00:13:57.754001] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.108 [2024-07-25 00:13:57.757594] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.108 [2024-07-25 00:13:57.766904] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.108 [2024-07-25 00:13:57.767327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.108 [2024-07-25 00:13:57.767360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.108 [2024-07-25 00:13:57.767379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.108 [2024-07-25 00:13:57.767620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.108 [2024-07-25 00:13:57.767867] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.108 [2024-07-25 00:13:57.767893] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.108 [2024-07-25 00:13:57.767910] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.108 [2024-07-25 00:13:57.771507] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.108 [2024-07-25 00:13:57.780821] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.108 [2024-07-25 00:13:57.781253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.108 [2024-07-25 00:13:57.781287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.108 [2024-07-25 00:13:57.781305] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.108 [2024-07-25 00:13:57.781545] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.108 [2024-07-25 00:13:57.781790] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.108 [2024-07-25 00:13:57.781817] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.108 [2024-07-25 00:13:57.781834] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.108 [2024-07-25 00:13:57.785426] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.108 [2024-07-25 00:13:57.794730] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.108 [2024-07-25 00:13:57.795232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.108 [2024-07-25 00:13:57.795265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.108 [2024-07-25 00:13:57.795289] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.108 [2024-07-25 00:13:57.795531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.108 [2024-07-25 00:13:57.795774] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.108 [2024-07-25 00:13:57.795800] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.108 [2024-07-25 00:13:57.795817] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.108 [2024-07-25 00:13:57.799410] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.108 [2024-07-25 00:13:57.808725] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.108 [2024-07-25 00:13:57.809137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.108 [2024-07-25 00:13:57.809179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.108 [2024-07-25 00:13:57.809199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.108 [2024-07-25 00:13:57.809439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.108 [2024-07-25 00:13:57.809683] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.108 [2024-07-25 00:13:57.809708] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.108 [2024-07-25 00:13:57.809725] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.108 [2024-07-25 00:13:57.813319] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.367 [2024-07-25 00:13:57.822863] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.367 [2024-07-25 00:13:57.823372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.367 [2024-07-25 00:13:57.823403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.367 [2024-07-25 00:13:57.823421] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.367 [2024-07-25 00:13:57.823676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.367 [2024-07-25 00:13:57.823922] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.367 [2024-07-25 00:13:57.823947] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.367 [2024-07-25 00:13:57.823964] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.367 [2024-07-25 00:13:57.827623] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.367 [2024-07-25 00:13:57.836732] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.367 [2024-07-25 00:13:57.837179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.367 [2024-07-25 00:13:57.837213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.367 [2024-07-25 00:13:57.837232] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.367 [2024-07-25 00:13:57.837472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.367 [2024-07-25 00:13:57.837717] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.367 [2024-07-25 00:13:57.837747] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.367 [2024-07-25 00:13:57.837765] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.367 [2024-07-25 00:13:57.841359] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.367 [2024-07-25 00:13:57.850674] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.367 [2024-07-25 00:13:57.851189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.367 [2024-07-25 00:13:57.851217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.367 [2024-07-25 00:13:57.851233] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.367 [2024-07-25 00:13:57.851463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.367 [2024-07-25 00:13:57.851721] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.367 [2024-07-25 00:13:57.851747] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.367 [2024-07-25 00:13:57.851765] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.367 [2024-07-25 00:13:57.855358] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.367 [2024-07-25 00:13:57.864686] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.367 [2024-07-25 00:13:57.865128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.367 [2024-07-25 00:13:57.865162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.367 [2024-07-25 00:13:57.865180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.367 [2024-07-25 00:13:57.865420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.367 [2024-07-25 00:13:57.865665] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.367 [2024-07-25 00:13:57.865691] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.367 [2024-07-25 00:13:57.865708] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.367 [2024-07-25 00:13:57.869324] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.367 [2024-07-25 00:13:57.878650] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.367 [2024-07-25 00:13:57.879073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.367 [2024-07-25 00:13:57.879114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.367 [2024-07-25 00:13:57.879135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.367 [2024-07-25 00:13:57.879386] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.367 [2024-07-25 00:13:57.879631] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.367 [2024-07-25 00:13:57.879656] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.367 [2024-07-25 00:13:57.879673] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.367 [2024-07-25 00:13:57.883263] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.367 [2024-07-25 00:13:57.892596] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.367 [2024-07-25 00:13:57.893177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.367 [2024-07-25 00:13:57.893211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.367 [2024-07-25 00:13:57.893230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.367 [2024-07-25 00:13:57.893470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.367 [2024-07-25 00:13:57.893715] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.367 [2024-07-25 00:13:57.893741] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.367 [2024-07-25 00:13:57.893757] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.367 [2024-07-25 00:13:57.897328] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.367 [2024-07-25 00:13:57.906535] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.367 [2024-07-25 00:13:57.906985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.367 [2024-07-25 00:13:57.907018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.367 [2024-07-25 00:13:57.907036] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.367 [2024-07-25 00:13:57.907300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.367 [2024-07-25 00:13:57.907554] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.367 [2024-07-25 00:13:57.907580] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.367 [2024-07-25 00:13:57.907597] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.368 [2024-07-25 00:13:57.911202] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.368 [2024-07-25 00:13:57.920514] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.368 [2024-07-25 00:13:57.921087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.368 [2024-07-25 00:13:57.921159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.368 [2024-07-25 00:13:57.921178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.368 [2024-07-25 00:13:57.921418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.368 [2024-07-25 00:13:57.921663] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.368 [2024-07-25 00:13:57.921688] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.368 [2024-07-25 00:13:57.921705] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.368 [2024-07-25 00:13:57.925292] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.368 [2024-07-25 00:13:57.934380] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.368 [2024-07-25 00:13:57.934828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.368 [2024-07-25 00:13:57.934857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.368 [2024-07-25 00:13:57.934874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.368 [2024-07-25 00:13:57.935144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.368 [2024-07-25 00:13:57.935389] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.368 [2024-07-25 00:13:57.935416] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.368 [2024-07-25 00:13:57.935432] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.368 [2024-07-25 00:13:57.939013] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.368 [2024-07-25 00:13:57.948344] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.368 [2024-07-25 00:13:57.948763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.368 [2024-07-25 00:13:57.948797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.368 [2024-07-25 00:13:57.948816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.368 [2024-07-25 00:13:57.949056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.368 [2024-07-25 00:13:57.949315] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.368 [2024-07-25 00:13:57.949341] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.368 [2024-07-25 00:13:57.949358] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.368 [2024-07-25 00:13:57.952964] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.368 [2024-07-25 00:13:57.962291] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.368 [2024-07-25 00:13:57.962733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.368 [2024-07-25 00:13:57.962760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.368 [2024-07-25 00:13:57.962776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.368 [2024-07-25 00:13:57.963021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.368 [2024-07-25 00:13:57.963280] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.368 [2024-07-25 00:13:57.963307] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.368 [2024-07-25 00:13:57.963323] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.368 [2024-07-25 00:13:57.966903] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.368 [2024-07-25 00:13:57.976245] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.368 [2024-07-25 00:13:57.976749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.368 [2024-07-25 00:13:57.976777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.368 [2024-07-25 00:13:57.976793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.368 [2024-07-25 00:13:57.977039] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.368 [2024-07-25 00:13:57.977309] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.368 [2024-07-25 00:13:57.977336] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.368 [2024-07-25 00:13:57.977369] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.368 [2024-07-25 00:13:57.980955] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.368 [2024-07-25 00:13:57.990270] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.368 [2024-07-25 00:13:57.990719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.368 [2024-07-25 00:13:57.990751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.368 [2024-07-25 00:13:57.990770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.368 [2024-07-25 00:13:57.991010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.368 [2024-07-25 00:13:57.991267] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.368 [2024-07-25 00:13:57.991293] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.368 [2024-07-25 00:13:57.991310] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.368 [2024-07-25 00:13:57.994889] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.368 [2024-07-25 00:13:58.004205] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.368 [2024-07-25 00:13:58.004643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.368 [2024-07-25 00:13:58.004675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.368 [2024-07-25 00:13:58.004693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.368 [2024-07-25 00:13:58.004932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.368 [2024-07-25 00:13:58.005190] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.368 [2024-07-25 00:13:58.005217] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.368 [2024-07-25 00:13:58.005233] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.368 [2024-07-25 00:13:58.008813] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.368 [2024-07-25 00:13:58.018120] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.368 [2024-07-25 00:13:58.018567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.368 [2024-07-25 00:13:58.018599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.368 [2024-07-25 00:13:58.018617] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.368 [2024-07-25 00:13:58.018856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.368 [2024-07-25 00:13:58.019100] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.368 [2024-07-25 00:13:58.019140] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.368 [2024-07-25 00:13:58.019156] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.368 [2024-07-25 00:13:58.022738] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.368 [2024-07-25 00:13:58.032049] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.368 [2024-07-25 00:13:58.032471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.368 [2024-07-25 00:13:58.032508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.368 [2024-07-25 00:13:58.032527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.368 [2024-07-25 00:13:58.032766] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.368 [2024-07-25 00:13:58.033010] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.368 [2024-07-25 00:13:58.033035] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.368 [2024-07-25 00:13:58.033052] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.368 [2024-07-25 00:13:58.036643] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.368 [2024-07-25 00:13:58.045951] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.368 [2024-07-25 00:13:58.046370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.368 [2024-07-25 00:13:58.046402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.368 [2024-07-25 00:13:58.046421] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.368 [2024-07-25 00:13:58.046662] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.368 [2024-07-25 00:13:58.046906] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.368 [2024-07-25 00:13:58.046932] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.368 [2024-07-25 00:13:58.046948] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.368 [2024-07-25 00:13:58.050544] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.368 [2024-07-25 00:13:58.059850] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.368 [2024-07-25 00:13:58.060298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.368 [2024-07-25 00:13:58.060326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.368 [2024-07-25 00:13:58.060342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.368 [2024-07-25 00:13:58.060605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.368 [2024-07-25 00:13:58.060850] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.368 [2024-07-25 00:13:58.060877] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.368 [2024-07-25 00:13:58.060894] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.368 [2024-07-25 00:13:58.064484] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.368 [2024-07-25 00:13:58.073789] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.368 [2024-07-25 00:13:58.074234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.368 [2024-07-25 00:13:58.074263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.368 [2024-07-25 00:13:58.074279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.368 [2024-07-25 00:13:58.074526] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.368 [2024-07-25 00:13:58.074776] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.368 [2024-07-25 00:13:58.074803] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.368 [2024-07-25 00:13:58.074820] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.368 [2024-07-25 00:13:58.078412] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.628 [2024-07-25 00:13:58.087804] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.628 [2024-07-25 00:13:58.088278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.628 [2024-07-25 00:13:58.088314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.628 [2024-07-25 00:13:58.088334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.628 [2024-07-25 00:13:58.088575] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.628 [2024-07-25 00:13:58.088819] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.628 [2024-07-25 00:13:58.088847] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.628 [2024-07-25 00:13:58.088864] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.628 [2024-07-25 00:13:58.092458] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.628 [2024-07-25 00:13:58.101760] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.628 [2024-07-25 00:13:58.102211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.628 [2024-07-25 00:13:58.102241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.628 [2024-07-25 00:13:58.102258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.628 [2024-07-25 00:13:58.102510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.628 [2024-07-25 00:13:58.102755] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.628 [2024-07-25 00:13:58.102781] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.628 [2024-07-25 00:13:58.102798] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.628 [2024-07-25 00:13:58.106390] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.628 [2024-07-25 00:13:58.115702] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.628 [2024-07-25 00:13:58.116155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.628 [2024-07-25 00:13:58.116188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.628 [2024-07-25 00:13:58.116206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.628 [2024-07-25 00:13:58.116446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.628 [2024-07-25 00:13:58.116690] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.628 [2024-07-25 00:13:58.116715] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.628 [2024-07-25 00:13:58.116731] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.628 [2024-07-25 00:13:58.120327] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.628 [2024-07-25 00:13:58.129634] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.628 [2024-07-25 00:13:58.130080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.628 [2024-07-25 00:13:58.130132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.628 [2024-07-25 00:13:58.130149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.628 [2024-07-25 00:13:58.130409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.628 [2024-07-25 00:13:58.130665] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.628 [2024-07-25 00:13:58.130690] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.628 [2024-07-25 00:13:58.130706] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.628 [2024-07-25 00:13:58.134298] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.628 [2024-07-25 00:13:58.143604] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.628 [2024-07-25 00:13:58.144042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.628 [2024-07-25 00:13:58.144073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.628 [2024-07-25 00:13:58.144092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.628 [2024-07-25 00:13:58.144339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.628 [2024-07-25 00:13:58.144584] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.628 [2024-07-25 00:13:58.144610] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.628 [2024-07-25 00:13:58.144626] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.628 [2024-07-25 00:13:58.148210] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.628 [2024-07-25 00:13:58.157518] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.628 [2024-07-25 00:13:58.157996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.628 [2024-07-25 00:13:58.158047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.628 [2024-07-25 00:13:58.158066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.628 [2024-07-25 00:13:58.158312] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.628 [2024-07-25 00:13:58.158557] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.628 [2024-07-25 00:13:58.158583] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.628 [2024-07-25 00:13:58.158599] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.628 [2024-07-25 00:13:58.162188] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.628 [2024-07-25 00:13:58.171498] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.628 [2024-07-25 00:13:58.171919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.629 [2024-07-25 00:13:58.171946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.629 [2024-07-25 00:13:58.171967] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.629 [2024-07-25 00:13:58.172225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.629 [2024-07-25 00:13:58.172471] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.629 [2024-07-25 00:13:58.172497] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.629 [2024-07-25 00:13:58.172513] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.629 [2024-07-25 00:13:58.176098] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.629 [2024-07-25 00:13:58.185266] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.629 [2024-07-25 00:13:58.185694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.629 [2024-07-25 00:13:58.185725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.629 [2024-07-25 00:13:58.185743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.629 [2024-07-25 00:13:58.185974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.629 [2024-07-25 00:13:58.186232] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.629 [2024-07-25 00:13:58.186258] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.629 [2024-07-25 00:13:58.186273] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.629 [2024-07-25 00:13:58.189575] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.629 [2024-07-25 00:13:58.198591] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.629 [2024-07-25 00:13:58.198976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.629 [2024-07-25 00:13:58.199004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.629 [2024-07-25 00:13:58.199020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.629 [2024-07-25 00:13:58.199267] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.629 [2024-07-25 00:13:58.199474] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.629 [2024-07-25 00:13:58.199497] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.629 [2024-07-25 00:13:58.199510] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.629 [2024-07-25 00:13:58.202468] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.629 [2024-07-25 00:13:58.211937] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.629 [2024-07-25 00:13:58.212363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.629 [2024-07-25 00:13:58.212392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.629 [2024-07-25 00:13:58.212424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.629 [2024-07-25 00:13:58.212672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.629 [2024-07-25 00:13:58.212868] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.629 [2024-07-25 00:13:58.212894] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.629 [2024-07-25 00:13:58.212908] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.629 [2024-07-25 00:13:58.215938] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.629 [2024-07-25 00:13:58.225234] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.629 [2024-07-25 00:13:58.225655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.629 [2024-07-25 00:13:58.225685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.629 [2024-07-25 00:13:58.225701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.629 [2024-07-25 00:13:58.225952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.629 [2024-07-25 00:13:58.226188] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.629 [2024-07-25 00:13:58.226212] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.629 [2024-07-25 00:13:58.226227] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.629 [2024-07-25 00:13:58.229214] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.629 [2024-07-25 00:13:58.238511] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.629 [2024-07-25 00:13:58.238904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.629 [2024-07-25 00:13:58.238931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.629 [2024-07-25 00:13:58.238947] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.629 [2024-07-25 00:13:58.239211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.629 [2024-07-25 00:13:58.239432] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.629 [2024-07-25 00:13:58.239454] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.629 [2024-07-25 00:13:58.239469] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.629 [2024-07-25 00:13:58.242350] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.629 [2024-07-25 00:13:58.251757] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.629 [2024-07-25 00:13:58.252189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.629 [2024-07-25 00:13:58.252216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.629 [2024-07-25 00:13:58.252232] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.629 [2024-07-25 00:13:58.252464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.629 [2024-07-25 00:13:58.252659] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.629 [2024-07-25 00:13:58.252680] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.629 [2024-07-25 00:13:58.252693] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.629 [2024-07-25 00:13:58.255680] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.629 [2024-07-25 00:13:58.265090] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.629 [2024-07-25 00:13:58.265512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.629 [2024-07-25 00:13:58.265540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.629 [2024-07-25 00:13:58.265556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.629 [2024-07-25 00:13:58.265793] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.629 [2024-07-25 00:13:58.266003] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.629 [2024-07-25 00:13:58.266025] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.629 [2024-07-25 00:13:58.266038] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.629 [2024-07-25 00:13:58.269042] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.629 [2024-07-25 00:13:58.278311] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.629 [2024-07-25 00:13:58.278795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.629 [2024-07-25 00:13:58.278823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.629 [2024-07-25 00:13:58.278840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.629 [2024-07-25 00:13:58.279090] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.629 [2024-07-25 00:13:58.279320] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.629 [2024-07-25 00:13:58.279343] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.629 [2024-07-25 00:13:58.279357] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.629 [2024-07-25 00:13:58.282318] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.629 [2024-07-25 00:13:58.291619] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.629 [2024-07-25 00:13:58.292014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.629 [2024-07-25 00:13:58.292042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.629 [2024-07-25 00:13:58.292059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.629 [2024-07-25 00:13:58.292326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.629 [2024-07-25 00:13:58.292541] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.629 [2024-07-25 00:13:58.292562] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.629 [2024-07-25 00:13:58.292575] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.630 [2024-07-25 00:13:58.295538] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.630 [2024-07-25 00:13:58.304858] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.630 [2024-07-25 00:13:58.305305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.630 [2024-07-25 00:13:58.305333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.630 [2024-07-25 00:13:58.305350] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.630 [2024-07-25 00:13:58.305589] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.630 [2024-07-25 00:13:58.305784] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.630 [2024-07-25 00:13:58.305805] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.630 [2024-07-25 00:13:58.305819] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.630 [2024-07-25 00:13:58.308824] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.630 [2024-07-25 00:13:58.318202] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.630 [2024-07-25 00:13:58.318625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.630 [2024-07-25 00:13:58.318653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.630 [2024-07-25 00:13:58.318669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.630 [2024-07-25 00:13:58.318897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.630 [2024-07-25 00:13:58.319093] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.630 [2024-07-25 00:13:58.319138] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.630 [2024-07-25 00:13:58.319152] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.630 [2024-07-25 00:13:58.322137] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.630 [2024-07-25 00:13:58.331463] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.630 [2024-07-25 00:13:58.331832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.630 [2024-07-25 00:13:58.331861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.630 [2024-07-25 00:13:58.331877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.630 [2024-07-25 00:13:58.332123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.630 [2024-07-25 00:13:58.332346] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.630 [2024-07-25 00:13:58.332369] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.630 [2024-07-25 00:13:58.332384] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.630 [2024-07-25 00:13:58.335403] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.889 [2024-07-25 00:13:58.344932] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.889 [2024-07-25 00:13:58.345428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.889 [2024-07-25 00:13:58.345460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.889 [2024-07-25 00:13:58.345478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.889 [2024-07-25 00:13:58.345730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.889 [2024-07-25 00:13:58.345925] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.889 [2024-07-25 00:13:58.345946] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.889 [2024-07-25 00:13:58.345968] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.889 [2024-07-25 00:13:58.349277] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.889 [2024-07-25 00:13:58.358288] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.889 [2024-07-25 00:13:58.358711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.889 [2024-07-25 00:13:58.358740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.889 [2024-07-25 00:13:58.358757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.889 [2024-07-25 00:13:58.359007] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.889 [2024-07-25 00:13:58.359247] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.889 [2024-07-25 00:13:58.359269] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.889 [2024-07-25 00:13:58.359283] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.889 [2024-07-25 00:13:58.362267] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.889 [2024-07-25 00:13:58.371534] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.889 [2024-07-25 00:13:58.371931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.890 [2024-07-25 00:13:58.371959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.890 [2024-07-25 00:13:58.371976] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.890 [2024-07-25 00:13:58.372222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.890 [2024-07-25 00:13:58.372438] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.890 [2024-07-25 00:13:58.372458] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.890 [2024-07-25 00:13:58.372471] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.890 [2024-07-25 00:13:58.375477] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.890 [2024-07-25 00:13:58.384819] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.890 [2024-07-25 00:13:58.385282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.890 [2024-07-25 00:13:58.385313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.890 [2024-07-25 00:13:58.385329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.890 [2024-07-25 00:13:58.385570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.890 [2024-07-25 00:13:58.385772] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.890 [2024-07-25 00:13:58.385793] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.890 [2024-07-25 00:13:58.385807] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.890 [2024-07-25 00:13:58.389186] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.890 [2024-07-25 00:13:58.398069] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.890 [2024-07-25 00:13:58.398456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.890 [2024-07-25 00:13:58.398485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.890 [2024-07-25 00:13:58.398501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.890 [2024-07-25 00:13:58.398738] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.890 [2024-07-25 00:13:58.398934] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.890 [2024-07-25 00:13:58.398954] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.890 [2024-07-25 00:13:58.398967] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.890 [2024-07-25 00:13:58.401976] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.890 [2024-07-25 00:13:58.411417] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.890 [2024-07-25 00:13:58.411893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.890 [2024-07-25 00:13:58.411921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.890 [2024-07-25 00:13:58.411937] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.890 [2024-07-25 00:13:58.412182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.890 [2024-07-25 00:13:58.412382] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.890 [2024-07-25 00:13:58.412403] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.890 [2024-07-25 00:13:58.412416] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.890 [2024-07-25 00:13:58.415433] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.890 [2024-07-25 00:13:58.424718] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.890 [2024-07-25 00:13:58.425206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.890 [2024-07-25 00:13:58.425235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.890 [2024-07-25 00:13:58.425251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.890 [2024-07-25 00:13:58.425507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.890 [2024-07-25 00:13:58.425702] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.890 [2024-07-25 00:13:58.425723] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.890 [2024-07-25 00:13:58.425737] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.890 [2024-07-25 00:13:58.428743] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.890 [2024-07-25 00:13:58.437957] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.890 [2024-07-25 00:13:58.438378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.890 [2024-07-25 00:13:58.438405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.890 [2024-07-25 00:13:58.438421] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.890 [2024-07-25 00:13:58.438658] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.890 [2024-07-25 00:13:58.438853] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.890 [2024-07-25 00:13:58.438874] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.890 [2024-07-25 00:13:58.438887] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.890 [2024-07-25 00:13:58.441894] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.890 [2024-07-25 00:13:58.451261] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.890 [2024-07-25 00:13:58.451682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.890 [2024-07-25 00:13:58.451711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.890 [2024-07-25 00:13:58.451727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.890 [2024-07-25 00:13:58.451979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.890 [2024-07-25 00:13:58.452203] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.890 [2024-07-25 00:13:58.452225] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.890 [2024-07-25 00:13:58.452238] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.890 [2024-07-25 00:13:58.455207] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.890 [2024-07-25 00:13:58.464487] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.890 [2024-07-25 00:13:58.464945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.890 [2024-07-25 00:13:58.464973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.890 [2024-07-25 00:13:58.464990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.890 [2024-07-25 00:13:58.465244] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.890 [2024-07-25 00:13:58.465476] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.890 [2024-07-25 00:13:58.465497] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.890 [2024-07-25 00:13:58.465510] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.890 [2024-07-25 00:13:58.468480] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.890 [2024-07-25 00:13:58.477751] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.890 [2024-07-25 00:13:58.478168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.890 [2024-07-25 00:13:58.478197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.890 [2024-07-25 00:13:58.478214] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.890 [2024-07-25 00:13:58.478475] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.890 [2024-07-25 00:13:58.478685] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.890 [2024-07-25 00:13:58.478706] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.890 [2024-07-25 00:13:58.478724] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.890 [2024-07-25 00:13:58.481876] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.890 [2024-07-25 00:13:58.490975] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.890 [2024-07-25 00:13:58.491404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.890 [2024-07-25 00:13:58.491434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.890 [2024-07-25 00:13:58.491450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.890 [2024-07-25 00:13:58.491705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.890 [2024-07-25 00:13:58.491901] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.890 [2024-07-25 00:13:58.491921] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.890 [2024-07-25 00:13:58.491935] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.891 [2024-07-25 00:13:58.494944] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.891 [2024-07-25 00:13:58.504297] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:52.891 [2024-07-25 00:13:58.504787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.891 [2024-07-25 00:13:58.504816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:52.891 [2024-07-25 00:13:58.504832] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:52.891 [2024-07-25 00:13:58.505081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:52.891 [2024-07-25 00:13:58.505303] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:52.891 [2024-07-25 00:13:58.505324] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:52.891 [2024-07-25 00:13:58.505338] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:52.891 [2024-07-25 00:13:58.508302] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:52.891 [2024-07-25 00:13:58.517617] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.891 [2024-07-25 00:13:58.518083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.891 [2024-07-25 00:13:58.518118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.891 [2024-07-25 00:13:58.518136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.891 [2024-07-25 00:13:58.518402] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.891 [2024-07-25 00:13:58.518613] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.891 [2024-07-25 00:13:58.518635] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.891 [2024-07-25 00:13:58.518648] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.891 [2024-07-25 00:13:58.521612] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.891 [2024-07-25 00:13:58.530924] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.891 [2024-07-25 00:13:58.531383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.891 [2024-07-25 00:13:58.531433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.891 [2024-07-25 00:13:58.531451] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.891 [2024-07-25 00:13:58.531687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.891 [2024-07-25 00:13:58.531881] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.891 [2024-07-25 00:13:58.531902] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.891 [2024-07-25 00:13:58.531916] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.891 [2024-07-25 00:13:58.534963] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.891 [2024-07-25 00:13:58.544253] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.891 [2024-07-25 00:13:58.544730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.891 [2024-07-25 00:13:58.544758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.891 [2024-07-25 00:13:58.544774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.891 [2024-07-25 00:13:58.545022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.891 [2024-07-25 00:13:58.545264] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.891 [2024-07-25 00:13:58.545288] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.891 [2024-07-25 00:13:58.545302] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.891 [2024-07-25 00:13:58.548288] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.891 [2024-07-25 00:13:58.557558] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.891 [2024-07-25 00:13:58.557928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.891 [2024-07-25 00:13:58.557957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.891 [2024-07-25 00:13:58.557973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.891 [2024-07-25 00:13:58.558206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.891 [2024-07-25 00:13:58.558428] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.891 [2024-07-25 00:13:58.558450] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.891 [2024-07-25 00:13:58.558463] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.891 [2024-07-25 00:13:58.561427] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.891 [2024-07-25 00:13:58.570838] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.891 [2024-07-25 00:13:58.571233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.891 [2024-07-25 00:13:58.571262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.891 [2024-07-25 00:13:58.571278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.891 [2024-07-25 00:13:58.571528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.891 [2024-07-25 00:13:58.571729] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.891 [2024-07-25 00:13:58.571751] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.891 [2024-07-25 00:13:58.571764] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.891 [2024-07-25 00:13:58.574735] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.891 [2024-07-25 00:13:58.584081] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.891 [2024-07-25 00:13:58.584506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.891 [2024-07-25 00:13:58.584535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.891 [2024-07-25 00:13:58.584552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.891 [2024-07-25 00:13:58.584806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.891 [2024-07-25 00:13:58.585001] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.891 [2024-07-25 00:13:58.585022] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.891 [2024-07-25 00:13:58.585036] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.891 [2024-07-25 00:13:58.588005] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.891 [2024-07-25 00:13:58.597431] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.891 [2024-07-25 00:13:58.597834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.891 [2024-07-25 00:13:58.597863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:52.891 [2024-07-25 00:13:58.597879] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:52.891 [2024-07-25 00:13:58.598141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:52.891 [2024-07-25 00:13:58.598370] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.891 [2024-07-25 00:13:58.598410] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.891 [2024-07-25 00:13:58.598424] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.891 [2024-07-25 00:13:58.601607] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.151 [2024-07-25 00:13:58.611072] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.151 [2024-07-25 00:13:58.611543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.151 [2024-07-25 00:13:58.611575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.151 [2024-07-25 00:13:58.611594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.151 [2024-07-25 00:13:58.611833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.151 [2024-07-25 00:13:58.612046] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.151 [2024-07-25 00:13:58.612067] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.151 [2024-07-25 00:13:58.612095] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.151 [2024-07-25 00:13:58.615164] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.151 [2024-07-25 00:13:58.624279] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.151 [2024-07-25 00:13:58.624698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.151 [2024-07-25 00:13:58.624726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.151 [2024-07-25 00:13:58.624759] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.151 [2024-07-25 00:13:58.625016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.151 [2024-07-25 00:13:58.625254] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.151 [2024-07-25 00:13:58.625277] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.151 [2024-07-25 00:13:58.625292] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.151 [2024-07-25 00:13:58.628293] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.151 [2024-07-25 00:13:58.637598] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.151 [2024-07-25 00:13:58.638032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.151 [2024-07-25 00:13:58.638060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.151 [2024-07-25 00:13:58.638076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.151 [2024-07-25 00:13:58.638343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.151 [2024-07-25 00:13:58.638574] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.151 [2024-07-25 00:13:58.638596] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.151 [2024-07-25 00:13:58.638609] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.151 [2024-07-25 00:13:58.641573] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.151 [2024-07-25 00:13:58.650823] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.151 [2024-07-25 00:13:58.651285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.151 [2024-07-25 00:13:58.651315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.151 [2024-07-25 00:13:58.651332] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.151 [2024-07-25 00:13:58.651589] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.151 [2024-07-25 00:13:58.651783] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.151 [2024-07-25 00:13:58.651804] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.151 [2024-07-25 00:13:58.651818] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.151 [2024-07-25 00:13:58.654822] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.151 [2024-07-25 00:13:58.663997] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.151 [2024-07-25 00:13:58.664426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.151 [2024-07-25 00:13:58.664455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.151 [2024-07-25 00:13:58.664478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.151 [2024-07-25 00:13:58.664735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.151 [2024-07-25 00:13:58.664932] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.151 [2024-07-25 00:13:58.664953] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.151 [2024-07-25 00:13:58.664967] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.151 [2024-07-25 00:13:58.667985] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.151 [2024-07-25 00:13:58.677305] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.151 [2024-07-25 00:13:58.677678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.151 [2024-07-25 00:13:58.677706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.151 [2024-07-25 00:13:58.677722] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.151 [2024-07-25 00:13:58.677940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.151 [2024-07-25 00:13:58.678180] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.151 [2024-07-25 00:13:58.678203] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.151 [2024-07-25 00:13:58.678217] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.151 [2024-07-25 00:13:58.681160] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.151 [2024-07-25 00:13:58.690500] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.151 [2024-07-25 00:13:58.690929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.151 [2024-07-25 00:13:58.690958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.151 [2024-07-25 00:13:58.690974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.151 [2024-07-25 00:13:58.691223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.151 [2024-07-25 00:13:58.691439] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.151 [2024-07-25 00:13:58.691461] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.151 [2024-07-25 00:13:58.691474] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.151 [2024-07-25 00:13:58.694401] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.151 [2024-07-25 00:13:58.703773] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.151 [2024-07-25 00:13:58.704240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.151 [2024-07-25 00:13:58.704269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.151 [2024-07-25 00:13:58.704287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.151 [2024-07-25 00:13:58.704534] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.151 [2024-07-25 00:13:58.704729] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.151 [2024-07-25 00:13:58.704755] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.151 [2024-07-25 00:13:58.704769] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.151 [2024-07-25 00:13:58.707733] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.151 [2024-07-25 00:13:58.717020] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.151 [2024-07-25 00:13:58.717444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.151 [2024-07-25 00:13:58.717473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.152 [2024-07-25 00:13:58.717490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.152 [2024-07-25 00:13:58.717740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.152 [2024-07-25 00:13:58.717934] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.152 [2024-07-25 00:13:58.717956] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.152 [2024-07-25 00:13:58.717969] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.152 [2024-07-25 00:13:58.720936] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.152 [2024-07-25 00:13:58.730298] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.152 [2024-07-25 00:13:58.730764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.152 [2024-07-25 00:13:58.730794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.152 [2024-07-25 00:13:58.730811] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.152 [2024-07-25 00:13:58.731059] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.152 [2024-07-25 00:13:58.731301] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.152 [2024-07-25 00:13:58.731325] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.152 [2024-07-25 00:13:58.731340] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.152 [2024-07-25 00:13:58.734363] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.152 [2024-07-25 00:13:58.743624] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.152 [2024-07-25 00:13:58.743987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.152 [2024-07-25 00:13:58.744014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.152 [2024-07-25 00:13:58.744030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.152 [2024-07-25 00:13:58.744295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.152 [2024-07-25 00:13:58.744508] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.152 [2024-07-25 00:13:58.744530] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.152 [2024-07-25 00:13:58.744543] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.152 [2024-07-25 00:13:58.747546] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.152 [2024-07-25 00:13:58.756842] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.152 [2024-07-25 00:13:58.757241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.152 [2024-07-25 00:13:58.757271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.152 [2024-07-25 00:13:58.757288] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.152 [2024-07-25 00:13:58.757543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.152 [2024-07-25 00:13:58.757737] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.152 [2024-07-25 00:13:58.757758] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.152 [2024-07-25 00:13:58.757772] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.152 [2024-07-25 00:13:58.760738] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.152 [2024-07-25 00:13:58.770067] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.152 [2024-07-25 00:13:58.770504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.152 [2024-07-25 00:13:58.770534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.152 [2024-07-25 00:13:58.770551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.152 [2024-07-25 00:13:58.770787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.152 [2024-07-25 00:13:58.770981] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.152 [2024-07-25 00:13:58.771002] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.152 [2024-07-25 00:13:58.771016] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.152 [2024-07-25 00:13:58.774028] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.152 [2024-07-25 00:13:58.783223] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.152 [2024-07-25 00:13:58.783643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.152 [2024-07-25 00:13:58.783672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.152 [2024-07-25 00:13:58.783688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.152 [2024-07-25 00:13:58.783937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.152 [2024-07-25 00:13:58.784158] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.152 [2024-07-25 00:13:58.784195] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.152 [2024-07-25 00:13:58.784210] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.152 [2024-07-25 00:13:58.787194] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.152 [2024-07-25 00:13:58.796467] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.152 [2024-07-25 00:13:58.796930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.152 [2024-07-25 00:13:58.796958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.152 [2024-07-25 00:13:58.796974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.152 [2024-07-25 00:13:58.797211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.152 [2024-07-25 00:13:58.797435] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.152 [2024-07-25 00:13:58.797472] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.152 [2024-07-25 00:13:58.797486] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.152 [2024-07-25 00:13:58.800452] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.152 [2024-07-25 00:13:58.809800] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.152 [2024-07-25 00:13:58.810202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.152 [2024-07-25 00:13:58.810231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.152 [2024-07-25 00:13:58.810248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.152 [2024-07-25 00:13:58.810501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.152 [2024-07-25 00:13:58.810696] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.152 [2024-07-25 00:13:58.810717] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.152 [2024-07-25 00:13:58.810730] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.152 [2024-07-25 00:13:58.813702] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.152 [2024-07-25 00:13:58.823071] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.152 [2024-07-25 00:13:58.823459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.152 [2024-07-25 00:13:58.823488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.152 [2024-07-25 00:13:58.823504] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.152 [2024-07-25 00:13:58.823741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.152 [2024-07-25 00:13:58.823936] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.152 [2024-07-25 00:13:58.823957] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.152 [2024-07-25 00:13:58.823970] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.152 [2024-07-25 00:13:58.827016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.152 [2024-07-25 00:13:58.836292] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.152 [2024-07-25 00:13:58.836711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.152 [2024-07-25 00:13:58.836740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.152 [2024-07-25 00:13:58.836756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.152 [2024-07-25 00:13:58.837009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.152 [2024-07-25 00:13:58.837249] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.152 [2024-07-25 00:13:58.837273] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.152 [2024-07-25 00:13:58.837292] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.152 [2024-07-25 00:13:58.840280] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.153 [2024-07-25 00:13:58.849448] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.153 [2024-07-25 00:13:58.849850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.153 [2024-07-25 00:13:58.849879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.153 [2024-07-25 00:13:58.849896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.153 [2024-07-25 00:13:58.850162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.153 [2024-07-25 00:13:58.850394] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.153 [2024-07-25 00:13:58.850417] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.153 [2024-07-25 00:13:58.850431] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.153 [2024-07-25 00:13:58.853394] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.153 [2024-07-25 00:13:58.862772] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.153 [2024-07-25 00:13:58.863189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.153 [2024-07-25 00:13:58.863219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.153 [2024-07-25 00:13:58.863235] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.153 [2024-07-25 00:13:58.863438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.153 [2024-07-25 00:13:58.863673] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.153 [2024-07-25 00:13:58.863712] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.153 [2024-07-25 00:13:58.863728] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-25 00:13:58.867223] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-25 00:13:58.876127] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-25 00:13:58.876606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-25 00:13:58.876638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-25 00:13:58.876655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.413 [2024-07-25 00:13:58.876921] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.413 [2024-07-25 00:13:58.877142] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-25 00:13:58.877165] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-25 00:13:58.877180] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-25 00:13:58.880151] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-25 00:13:58.889517] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-25 00:13:58.889873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-25 00:13:58.889902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-25 00:13:58.889919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.413 [2024-07-25 00:13:58.890165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.413 [2024-07-25 00:13:58.890379] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-25 00:13:58.890402] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-25 00:13:58.890434] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-25 00:13:58.893381] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-25 00:13:58.902834] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-25 00:13:58.903302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-25 00:13:58.903332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-25 00:13:58.903350] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.413 [2024-07-25 00:13:58.903603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.413 [2024-07-25 00:13:58.903798] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-25 00:13:58.903820] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-25 00:13:58.903833] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-25 00:13:58.906831] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-25 00:13:58.916080] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-25 00:13:58.916502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-25 00:13:58.916529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-25 00:13:58.916545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.413 [2024-07-25 00:13:58.916759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.413 [2024-07-25 00:13:58.916971] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-25 00:13:58.916992] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-25 00:13:58.917009] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-25 00:13:58.920032] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-25 00:13:58.929458] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-25 00:13:58.929824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-25 00:13:58.929852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-25 00:13:58.929869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.413 [2024-07-25 00:13:58.930112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.413 [2024-07-25 00:13:58.930344] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-25 00:13:58.930366] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-25 00:13:58.930380] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-25 00:13:58.933437] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-25 00:13:58.942777] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-25 00:13:58.943174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-25 00:13:58.943203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-25 00:13:58.943220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.413 [2024-07-25 00:13:58.943467] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.413 [2024-07-25 00:13:58.943681] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-25 00:13:58.943703] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-25 00:13:58.943717] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-25 00:13:58.946727] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-25 00:13:58.956048] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-25 00:13:58.956465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-25 00:13:58.956494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-25 00:13:58.956510] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.413 [2024-07-25 00:13:58.956745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.413 [2024-07-25 00:13:58.956955] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-25 00:13:58.956977] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-25 00:13:58.956991] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-25 00:13:58.959959] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-25 00:13:58.969375] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-25 00:13:58.969817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-25 00:13:58.969845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-25 00:13:58.969861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.413 [2024-07-25 00:13:58.970096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.413 [2024-07-25 00:13:58.970329] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-25 00:13:58.970350] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-25 00:13:58.970364] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-25 00:13:58.973383] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-25 00:13:58.982726] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-25 00:13:58.983165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-25 00:13:58.983194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-25 00:13:58.983210] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.413 [2024-07-25 00:13:58.983447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.413 [2024-07-25 00:13:58.983642] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-25 00:13:58.983663] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-25 00:13:58.983676] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-25 00:13:58.986683] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.414 [2024-07-25 00:13:58.996074] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.414 [2024-07-25 00:13:58.996526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.414 [2024-07-25 00:13:58.996554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.414 [2024-07-25 00:13:58.996570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.414 [2024-07-25 00:13:58.996806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.414 [2024-07-25 00:13:58.997016] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.414 [2024-07-25 00:13:58.997037] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.414 [2024-07-25 00:13:58.997050] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.414 [2024-07-25 00:13:59.000056] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.414 [2024-07-25 00:13:59.009519] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.414 [2024-07-25 00:13:59.009978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.414 [2024-07-25 00:13:59.010007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.414 [2024-07-25 00:13:59.010024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.414 [2024-07-25 00:13:59.010272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.414 [2024-07-25 00:13:59.010492] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.414 [2024-07-25 00:13:59.010512] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.414 [2024-07-25 00:13:59.010525] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.414 [2024-07-25 00:13:59.013491] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.414 [2024-07-25 00:13:59.022770] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.414 [2024-07-25 00:13:59.023298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.414 [2024-07-25 00:13:59.023335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.414 [2024-07-25 00:13:59.023353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.414 [2024-07-25 00:13:59.023603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.414 [2024-07-25 00:13:59.023798] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.414 [2024-07-25 00:13:59.023820] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.414 [2024-07-25 00:13:59.023834] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.414 [2024-07-25 00:13:59.026887] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.414 [2024-07-25 00:13:59.036109] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.414 [2024-07-25 00:13:59.036590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.414 [2024-07-25 00:13:59.036618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.414 [2024-07-25 00:13:59.036634] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.414 [2024-07-25 00:13:59.036868] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.414 [2024-07-25 00:13:59.037063] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.414 [2024-07-25 00:13:59.037098] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.414 [2024-07-25 00:13:59.037122] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.414 [2024-07-25 00:13:59.040132] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.414 [2024-07-25 00:13:59.049303] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.414 [2024-07-25 00:13:59.049801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.414 [2024-07-25 00:13:59.049830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.414 [2024-07-25 00:13:59.049847] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.414 [2024-07-25 00:13:59.050100] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.414 [2024-07-25 00:13:59.050324] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.414 [2024-07-25 00:13:59.050345] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.414 [2024-07-25 00:13:59.050359] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.414 [2024-07-25 00:13:59.053329] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.414 [2024-07-25 00:13:59.062649] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.414 [2024-07-25 00:13:59.063043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.414 [2024-07-25 00:13:59.063071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.414 [2024-07-25 00:13:59.063086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.414 [2024-07-25 00:13:59.063331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.414 [2024-07-25 00:13:59.063547] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.414 [2024-07-25 00:13:59.063568] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.414 [2024-07-25 00:13:59.063581] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.414 [2024-07-25 00:13:59.066626] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.414 [2024-07-25 00:13:59.075929] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.414 [2024-07-25 00:13:59.076319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.414 [2024-07-25 00:13:59.076347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.414 [2024-07-25 00:13:59.076364] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.414 [2024-07-25 00:13:59.076609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.414 [2024-07-25 00:13:59.076805] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.414 [2024-07-25 00:13:59.076825] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.414 [2024-07-25 00:13:59.076838] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.414 [2024-07-25 00:13:59.079841] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.414 [2024-07-25 00:13:59.089303] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.414 [2024-07-25 00:13:59.089802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.414 [2024-07-25 00:13:59.089831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.414 [2024-07-25 00:13:59.089847] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.414 [2024-07-25 00:13:59.090100] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.414 [2024-07-25 00:13:59.090330] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.414 [2024-07-25 00:13:59.090352] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.414 [2024-07-25 00:13:59.090366] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.414 [2024-07-25 00:13:59.093317] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.414 [2024-07-25 00:13:59.102673] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.414 [2024-07-25 00:13:59.103021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.414 [2024-07-25 00:13:59.103048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.414 [2024-07-25 00:13:59.103065] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.414 [2024-07-25 00:13:59.103288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.414 [2024-07-25 00:13:59.103501] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.414 [2024-07-25 00:13:59.103522] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.414 [2024-07-25 00:13:59.103536] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.414 [2024-07-25 00:13:59.106543] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.414 [2024-07-25 00:13:59.116195] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.414 [2024-07-25 00:13:59.116642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.414 [2024-07-25 00:13:59.116671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.414 [2024-07-25 00:13:59.116689] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.414 [2024-07-25 00:13:59.116941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.414 [2024-07-25 00:13:59.117178] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.415 [2024-07-25 00:13:59.117201] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.415 [2024-07-25 00:13:59.117214] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.415 [2024-07-25 00:13:59.120203] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.674 [2024-07-25 00:13:59.129732] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.674 [2024-07-25 00:13:59.130221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.674 [2024-07-25 00:13:59.130254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.674 [2024-07-25 00:13:59.130272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.674 [2024-07-25 00:13:59.130527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.674 [2024-07-25 00:13:59.130722] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.674 [2024-07-25 00:13:59.130743] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.674 [2024-07-25 00:13:59.130756] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.674 [2024-07-25 00:13:59.134075] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.674 [2024-07-25 00:13:59.143124] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.674 [2024-07-25 00:13:59.143472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.674 [2024-07-25 00:13:59.143501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.674 [2024-07-25 00:13:59.143517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.674 [2024-07-25 00:13:59.143733] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.674 [2024-07-25 00:13:59.143942] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.674 [2024-07-25 00:13:59.143965] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.674 [2024-07-25 00:13:59.143978] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.674 [2024-07-25 00:13:59.146983] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.674 [2024-07-25 00:13:59.156468] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.674 [2024-07-25 00:13:59.156817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.674 [2024-07-25 00:13:59.156845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.674 [2024-07-25 00:13:59.156866] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.674 [2024-07-25 00:13:59.157099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.674 [2024-07-25 00:13:59.157329] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.674 [2024-07-25 00:13:59.157352] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.674 [2024-07-25 00:13:59.157367] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.674 [2024-07-25 00:13:59.160392] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.674 [2024-07-25 00:13:59.169793] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.674 [2024-07-25 00:13:59.170225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.674 [2024-07-25 00:13:59.170255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.674 [2024-07-25 00:13:59.170272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.674 [2024-07-25 00:13:59.170514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.674 [2024-07-25 00:13:59.170724] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.674 [2024-07-25 00:13:59.170745] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.674 [2024-07-25 00:13:59.170758] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.674 [2024-07-25 00:13:59.173737] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.674 [2024-07-25 00:13:59.183032] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.674 [2024-07-25 00:13:59.183519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.674 [2024-07-25 00:13:59.183548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.674 [2024-07-25 00:13:59.183564] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.674 [2024-07-25 00:13:59.183816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.674 [2024-07-25 00:13:59.184011] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.674 [2024-07-25 00:13:59.184032] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.674 [2024-07-25 00:13:59.184044] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.674 [2024-07-25 00:13:59.187070] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.674 [2024-07-25 00:13:59.196344] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.674 [2024-07-25 00:13:59.196803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.674 [2024-07-25 00:13:59.196831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.674 [2024-07-25 00:13:59.196848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.674 [2024-07-25 00:13:59.197100] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.674 [2024-07-25 00:13:59.197348] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.674 [2024-07-25 00:13:59.197377] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.674 [2024-07-25 00:13:59.197392] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.674 [2024-07-25 00:13:59.200728] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.674 [2024-07-25 00:13:59.209589] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.674 [2024-07-25 00:13:59.210050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.674 [2024-07-25 00:13:59.210078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.674 [2024-07-25 00:13:59.210094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.675 [2024-07-25 00:13:59.210359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.675 [2024-07-25 00:13:59.210570] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.675 [2024-07-25 00:13:59.210591] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.675 [2024-07-25 00:13:59.210605] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.675 [2024-07-25 00:13:59.213645] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.675 [2024-07-25 00:13:59.222941] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.675 [2024-07-25 00:13:59.223362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.675 [2024-07-25 00:13:59.223390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.675 [2024-07-25 00:13:59.223407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.675 [2024-07-25 00:13:59.223638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.675 [2024-07-25 00:13:59.223833] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.675 [2024-07-25 00:13:59.223853] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.675 [2024-07-25 00:13:59.223867] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.675 [2024-07-25 00:13:59.226895] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.675 [2024-07-25 00:13:59.236247] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.675 [2024-07-25 00:13:59.236682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.675 [2024-07-25 00:13:59.236712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.675 [2024-07-25 00:13:59.236728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.675 [2024-07-25 00:13:59.236982] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.675 [2024-07-25 00:13:59.237221] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.675 [2024-07-25 00:13:59.237245] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.675 [2024-07-25 00:13:59.237259] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.675 [2024-07-25 00:13:59.240247] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.675 [2024-07-25 00:13:59.249618] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.675 [2024-07-25 00:13:59.250026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.675 [2024-07-25 00:13:59.250055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.675 [2024-07-25 00:13:59.250072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.675 [2024-07-25 00:13:59.250312] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.675 [2024-07-25 00:13:59.250548] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.675 [2024-07-25 00:13:59.250570] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.675 [2024-07-25 00:13:59.250583] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.675 [2024-07-25 00:13:59.253547] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.675 [2024-07-25 00:13:59.262841] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.675 [2024-07-25 00:13:59.263302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.675 [2024-07-25 00:13:59.263331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.675 [2024-07-25 00:13:59.263348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.675 [2024-07-25 00:13:59.263605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.675 [2024-07-25 00:13:59.263800] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.675 [2024-07-25 00:13:59.263821] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.675 [2024-07-25 00:13:59.263835] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.675 [2024-07-25 00:13:59.266864] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.675 [2024-07-25 00:13:59.276138] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.675 [2024-07-25 00:13:59.276506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.675 [2024-07-25 00:13:59.276535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.675 [2024-07-25 00:13:59.276551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.675 [2024-07-25 00:13:59.276791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.675 [2024-07-25 00:13:59.276986] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.675 [2024-07-25 00:13:59.277007] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.675 [2024-07-25 00:13:59.277020] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.675 [2024-07-25 00:13:59.280029] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.675 [2024-07-25 00:13:59.289306] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.675 [2024-07-25 00:13:59.289785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.675 [2024-07-25 00:13:59.289814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.675 [2024-07-25 00:13:59.289830] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.675 [2024-07-25 00:13:59.290089] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.675 [2024-07-25 00:13:59.290333] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.675 [2024-07-25 00:13:59.290356] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.675 [2024-07-25 00:13:59.290370] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.675 [2024-07-25 00:13:59.293353] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.675 [2024-07-25 00:13:59.303211] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.675 [2024-07-25 00:13:59.303637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.675 [2024-07-25 00:13:59.303669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.675 [2024-07-25 00:13:59.303688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.675 [2024-07-25 00:13:59.303926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.675 [2024-07-25 00:13:59.304192] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.675 [2024-07-25 00:13:59.304216] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.675 [2024-07-25 00:13:59.304230] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.675 [2024-07-25 00:13:59.307755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.675 [2024-07-25 00:13:59.317206] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.675 [2024-07-25 00:13:59.317657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.675 [2024-07-25 00:13:59.317685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.675 [2024-07-25 00:13:59.317702] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.675 [2024-07-25 00:13:59.317951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.675 [2024-07-25 00:13:59.318216] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.675 [2024-07-25 00:13:59.318240] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.675 [2024-07-25 00:13:59.318255] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.675 [2024-07-25 00:13:59.321834] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.675 [2024-07-25 00:13:59.331127] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.675 [2024-07-25 00:13:59.331573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.675 [2024-07-25 00:13:59.331604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.675 [2024-07-25 00:13:59.331623] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.675 [2024-07-25 00:13:59.331861] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.675 [2024-07-25 00:13:59.332117] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.675 [2024-07-25 00:13:59.332143] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.675 [2024-07-25 00:13:59.332179] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.675 [2024-07-25 00:13:59.335735] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.675 [2024-07-25 00:13:59.344983] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.676 [2024-07-25 00:13:59.345507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.676 [2024-07-25 00:13:59.345536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.676 [2024-07-25 00:13:59.345552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.676 [2024-07-25 00:13:59.345810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.676 [2024-07-25 00:13:59.346054] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.676 [2024-07-25 00:13:59.346080] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.676 [2024-07-25 00:13:59.346097] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.676 [2024-07-25 00:13:59.349676] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.676 [2024-07-25 00:13:59.358968] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.676 [2024-07-25 00:13:59.359415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.676 [2024-07-25 00:13:59.359447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.676 [2024-07-25 00:13:59.359466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.676 [2024-07-25 00:13:59.359705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.676 [2024-07-25 00:13:59.359948] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.676 [2024-07-25 00:13:59.359974] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.676 [2024-07-25 00:13:59.359990] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.676 [2024-07-25 00:13:59.363564] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.676 [2024-07-25 00:13:59.372853] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.676 [2024-07-25 00:13:59.373318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.676 [2024-07-25 00:13:59.373347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.676 [2024-07-25 00:13:59.373364] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.676 [2024-07-25 00:13:59.373628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.676 [2024-07-25 00:13:59.373874] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.676 [2024-07-25 00:13:59.373900] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.676 [2024-07-25 00:13:59.373917] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.676 [2024-07-25 00:13:59.377498] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.676 [2024-07-25 00:13:59.386827] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.676 [2024-07-25 00:13:59.387312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.676 [2024-07-25 00:13:59.387353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.676 [2024-07-25 00:13:59.387382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.676 [2024-07-25 00:13:59.387664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.676 [2024-07-25 00:13:59.387890] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.676 [2024-07-25 00:13:59.387917] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.676 [2024-07-25 00:13:59.387935] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.936 [2024-07-25 00:13:59.391557] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.936 [2024-07-25 00:13:59.400698] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.936 [2024-07-25 00:13:59.401199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.936 [2024-07-25 00:13:59.401235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.936 [2024-07-25 00:13:59.401255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.936 [2024-07-25 00:13:59.401496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.936 [2024-07-25 00:13:59.401740] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.936 [2024-07-25 00:13:59.401766] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.936 [2024-07-25 00:13:59.401782] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.936 [2024-07-25 00:13:59.405363] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.936 [2024-07-25 00:13:59.414653] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.936 [2024-07-25 00:13:59.415100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.936 [2024-07-25 00:13:59.415141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.936 [2024-07-25 00:13:59.415160] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.936 [2024-07-25 00:13:59.415401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.936 [2024-07-25 00:13:59.415645] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.936 [2024-07-25 00:13:59.415671] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.936 [2024-07-25 00:13:59.415687] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.936 [2024-07-25 00:13:59.419264] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 939739 Killed "${NVMF_APP[@]}" "$@"
00:33:53.936 00:13:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:33:53.936 00:13:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:33:53.936 00:13:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:53.936 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable
00:33:53.936 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-07-25 00:13:59.428571] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.936 [2024-07-25 00:13:59.429018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.936 [2024-07-25 00:13:59.429051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420
00:33:53.936 [2024-07-25 00:13:59.429070] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set
00:33:53.936 [2024-07-25 00:13:59.429318] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor
00:33:53.936 [2024-07-25 00:13:59.429563] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.936 [2024-07-25 00:13:59.429590] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.936 [2024-07-25 00:13:59.429606] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.936 00:13:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=940692
00:33:53.936 00:13:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:33:53.936 00:13:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 940692
00:33:53.936 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 940692 ']'
00:33:53.936 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:53.936 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:53.936 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable
00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-07-25 00:13:59.433221] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.936 [2024-07-25 00:13:59.442627] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.936 [2024-07-25 00:13:59.443070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.936 [2024-07-25 00:13:59.443113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.936 [2024-07-25 00:13:59.443134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.936 [2024-07-25 00:13:59.443374] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.936 [2024-07-25 00:13:59.443620] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.936 [2024-07-25 00:13:59.443645] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.936 [2024-07-25 00:13:59.443661] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.936 [2024-07-25 00:13:59.447248] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.936 [2024-07-25 00:13:59.456320] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.936 [2024-07-25 00:13:59.456750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.936 [2024-07-25 00:13:59.456778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.936 [2024-07-25 00:13:59.456795] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.936 [2024-07-25 00:13:59.457021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.936 [2024-07-25 00:13:59.457272] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.936 [2024-07-25 00:13:59.457296] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.936 [2024-07-25 00:13:59.457311] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.936 [2024-07-25 00:13:59.460463] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.936 [2024-07-25 00:13:59.469753] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.936 [2024-07-25 00:13:59.470234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.936 [2024-07-25 00:13:59.470264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.936 [2024-07-25 00:13:59.470282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.936 [2024-07-25 00:13:59.470522] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.936 [2024-07-25 00:13:59.470717] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.936 [2024-07-25 00:13:59.470737] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.936 [2024-07-25 00:13:59.470751] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.936 [2024-07-25 00:13:59.473819] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.936 [2024-07-25 00:13:59.475530] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:33:53.936 [2024-07-25 00:13:59.475586] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:53.936 [2024-07-25 00:13:59.482966] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.936 [2024-07-25 00:13:59.483434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.936 [2024-07-25 00:13:59.483476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.936 [2024-07-25 00:13:59.483493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.936 [2024-07-25 00:13:59.483722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.936 [2024-07-25 00:13:59.483917] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.936 [2024-07-25 00:13:59.483937] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.936 [2024-07-25 00:13:59.483950] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.936 [2024-07-25 00:13:59.487001] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.936 [2024-07-25 00:13:59.496264] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.936 [2024-07-25 00:13:59.496741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.936 [2024-07-25 00:13:59.496769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.936 [2024-07-25 00:13:59.496785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.937 [2024-07-25 00:13:59.497041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.937 [2024-07-25 00:13:59.497284] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.937 [2024-07-25 00:13:59.497311] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.937 [2024-07-25 00:13:59.497325] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.937 [2024-07-25 00:13:59.500504] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.937 [2024-07-25 00:13:59.509567] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.937 [2024-07-25 00:13:59.510032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.937 [2024-07-25 00:13:59.510060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.937 [2024-07-25 00:13:59.510076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.937 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.937 [2024-07-25 00:13:59.510326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.937 [2024-07-25 00:13:59.510561] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.937 [2024-07-25 00:13:59.510581] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.937 [2024-07-25 00:13:59.510596] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.937 [2024-07-25 00:13:59.513770] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.937 [2024-07-25 00:13:59.523324] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.937 [2024-07-25 00:13:59.524418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.937 [2024-07-25 00:13:59.524462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.937 [2024-07-25 00:13:59.524484] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.937 [2024-07-25 00:13:59.524729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.937 [2024-07-25 00:13:59.524976] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.937 [2024-07-25 00:13:59.525002] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.937 [2024-07-25 00:13:59.525018] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.937 [2024-07-25 00:13:59.528624] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.937 [2024-07-25 00:13:59.537397] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.937 [2024-07-25 00:13:59.537820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.937 [2024-07-25 00:13:59.537854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.937 [2024-07-25 00:13:59.537873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.937 [2024-07-25 00:13:59.538124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.937 [2024-07-25 00:13:59.538363] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.937 [2024-07-25 00:13:59.538406] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.937 [2024-07-25 00:13:59.538422] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.937 [2024-07-25 00:13:59.542048] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.937 [2024-07-25 00:13:59.544543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:53.937 [2024-07-25 00:13:59.551518] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.937 [2024-07-25 00:13:59.551979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.937 [2024-07-25 00:13:59.552015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.937 [2024-07-25 00:13:59.552036] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.937 [2024-07-25 00:13:59.552291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.937 [2024-07-25 00:13:59.552551] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.937 [2024-07-25 00:13:59.552574] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.937 [2024-07-25 00:13:59.552590] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.937 [2024-07-25 00:13:59.556208] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.937 [2024-07-25 00:13:59.565591] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.937 [2024-07-25 00:13:59.566215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.937 [2024-07-25 00:13:59.566250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.937 [2024-07-25 00:13:59.566272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.937 [2024-07-25 00:13:59.566527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.937 [2024-07-25 00:13:59.566776] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.937 [2024-07-25 00:13:59.566802] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.937 [2024-07-25 00:13:59.566821] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.937 [2024-07-25 00:13:59.570468] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.937 [2024-07-25 00:13:59.579642] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.937 [2024-07-25 00:13:59.580082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.937 [2024-07-25 00:13:59.580123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.937 [2024-07-25 00:13:59.580158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.937 [2024-07-25 00:13:59.580389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.937 [2024-07-25 00:13:59.580655] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.937 [2024-07-25 00:13:59.580681] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.937 [2024-07-25 00:13:59.580697] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.937 [2024-07-25 00:13:59.584303] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.937 [2024-07-25 00:13:59.593620] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.937 [2024-07-25 00:13:59.594082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.937 [2024-07-25 00:13:59.594126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.937 [2024-07-25 00:13:59.594170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.937 [2024-07-25 00:13:59.594414] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.937 [2024-07-25 00:13:59.594672] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.937 [2024-07-25 00:13:59.594699] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.937 [2024-07-25 00:13:59.594717] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.937 [2024-07-25 00:13:59.598289] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.937 [2024-07-25 00:13:59.607606] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.937 [2024-07-25 00:13:59.608221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.937 [2024-07-25 00:13:59.608259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.937 [2024-07-25 00:13:59.608281] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.937 [2024-07-25 00:13:59.608545] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.937 [2024-07-25 00:13:59.608796] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.937 [2024-07-25 00:13:59.608822] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.937 [2024-07-25 00:13:59.608842] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.937 [2024-07-25 00:13:59.612403] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.937 [2024-07-25 00:13:59.621527] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.937 [2024-07-25 00:13:59.621992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.937 [2024-07-25 00:13:59.622024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.937 [2024-07-25 00:13:59.622043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.937 [2024-07-25 00:13:59.622319] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.937 [2024-07-25 00:13:59.622582] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.937 [2024-07-25 00:13:59.622608] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.937 [2024-07-25 00:13:59.622625] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.937 [2024-07-25 00:13:59.626201] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.937 [2024-07-25 00:13:59.635499] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.938 [2024-07-25 00:13:59.635935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.938 [2024-07-25 00:13:59.635967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.938 [2024-07-25 00:13:59.635986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.938 [2024-07-25 00:13:59.636239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.938 [2024-07-25 00:13:59.636489] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.938 [2024-07-25 00:13:59.636523] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.938 [2024-07-25 00:13:59.636541] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.938 [2024-07-25 00:13:59.639994] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.938 [2024-07-25 00:13:59.640254] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:53.938 [2024-07-25 00:13:59.640286] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:53.938 [2024-07-25 00:13:59.640300] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:53.938 [2024-07-25 00:13:59.640311] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:33:53.938 [2024-07-25 00:13:59.640321] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:53.938 [2024-07-25 00:13:59.640393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:53.938 [2024-07-25 00:13:59.640473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:53.938 [2024-07-25 00:13:59.640475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.938 [2024-07-25 00:13:59.649346] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.938 [2024-07-25 00:13:59.649915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.938 [2024-07-25 00:13:59.649954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:53.938 [2024-07-25 00:13:59.649976] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:53.938 [2024-07-25 00:13:59.650227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:53.938 [2024-07-25 00:13:59.650466] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.938 [2024-07-25 00:13:59.650491] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.938 [2024-07-25 00:13:59.650509] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.197 [2024-07-25 00:13:59.653911] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.197 [2024-07-25 00:13:59.662882] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.197 [2024-07-25 00:13:59.663419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.197 [2024-07-25 00:13:59.663458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:54.197 [2024-07-25 00:13:59.663480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:54.197 [2024-07-25 00:13:59.663732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:54.197 [2024-07-25 00:13:59.663944] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.197 [2024-07-25 00:13:59.663966] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.197 [2024-07-25 00:13:59.663983] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.197 [2024-07-25 00:13:59.667233] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.197 [2024-07-25 00:13:59.676448] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.197 [2024-07-25 00:13:59.677063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.197 [2024-07-25 00:13:59.677111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:54.197 [2024-07-25 00:13:59.677144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:54.197 [2024-07-25 00:13:59.677371] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:54.197 [2024-07-25 00:13:59.677618] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.197 [2024-07-25 00:13:59.677641] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.197 [2024-07-25 00:13:59.677660] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.197 [2024-07-25 00:13:59.680805] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.197 [2024-07-25 00:13:59.690058] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.197 [2024-07-25 00:13:59.690667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.197 [2024-07-25 00:13:59.690709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:54.197 [2024-07-25 00:13:59.690731] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:54.197 [2024-07-25 00:13:59.690986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:54.197 [2024-07-25 00:13:59.691245] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.197 [2024-07-25 00:13:59.691270] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.197 [2024-07-25 00:13:59.691288] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.197 [2024-07-25 00:13:59.694514] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.197 [2024-07-25 00:13:59.703600] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.197 [2024-07-25 00:13:59.704126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.197 [2024-07-25 00:13:59.704162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:54.197 [2024-07-25 00:13:59.704184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:54.197 [2024-07-25 00:13:59.704408] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:54.197 [2024-07-25 00:13:59.704648] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.197 [2024-07-25 00:13:59.704671] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.197 [2024-07-25 00:13:59.704688] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.197 [2024-07-25 00:13:59.707929] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.197 [2024-07-25 00:13:59.717181] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.197 [2024-07-25 00:13:59.717767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.197 [2024-07-25 00:13:59.717806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:54.197 [2024-07-25 00:13:59.717828] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:54.197 [2024-07-25 00:13:59.718081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:54.197 [2024-07-25 00:13:59.718332] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.198 [2024-07-25 00:13:59.718377] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.198 [2024-07-25 00:13:59.718398] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.198 [2024-07-25 00:13:59.721698] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.198 [2024-07-25 00:13:59.730735] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.198 [2024-07-25 00:13:59.731125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.198 [2024-07-25 00:13:59.731155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:54.198 [2024-07-25 00:13:59.731172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:54.198 [2024-07-25 00:13:59.731390] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:54.198 [2024-07-25 00:13:59.731635] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.198 [2024-07-25 00:13:59.731658] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.198 [2024-07-25 00:13:59.731672] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.198 [2024-07-25 00:13:59.734900] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.198 [2024-07-25 00:13:59.744338] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.198 [2024-07-25 00:13:59.744747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.198 [2024-07-25 00:13:59.744776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:54.198 [2024-07-25 00:13:59.744794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:54.198 [2024-07-25 00:13:59.745023] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:54.198 [2024-07-25 00:13:59.745266] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.198 [2024-07-25 00:13:59.745291] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.198 [2024-07-25 00:13:59.745306] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.198 [2024-07-25 00:13:59.748569] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.198 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:54.198 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:54.198 00:13:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:54.198 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:54.198 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.198 [2024-07-25 00:13:59.757971] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.198 [2024-07-25 00:13:59.758380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.198 [2024-07-25 00:13:59.758409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:54.198 [2024-07-25 00:13:59.758426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:54.198 [2024-07-25 00:13:59.758669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:54.198 [2024-07-25 00:13:59.758876] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.198 [2024-07-25 00:13:59.758903] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.198 [2024-07-25 00:13:59.758917] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.198 [2024-07-25 00:13:59.762173] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.198 [2024-07-25 00:13:59.771563] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.198 [2024-07-25 00:13:59.771977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.198 [2024-07-25 00:13:59.772006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:54.198 [2024-07-25 00:13:59.772023] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:54.198 [2024-07-25 00:13:59.772250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:54.198 [2024-07-25 00:13:59.772499] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.198 [2024-07-25 00:13:59.772521] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.198 [2024-07-25 00:13:59.772535] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.198 [2024-07-25 00:13:59.775746] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.198 00:13:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:54.198 00:13:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:54.198 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.198 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.198 [2024-07-25 00:13:59.783712] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.198 [2024-07-25 00:13:59.785132] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.198 [2024-07-25 00:13:59.785549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.198 [2024-07-25 00:13:59.785578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:54.198 [2024-07-25 00:13:59.785595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:54.198 [2024-07-25 00:13:59.785838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:54.198 [2024-07-25 00:13:59.786045] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.198 [2024-07-25 00:13:59.786067] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.198 [2024-07-25 00:13:59.786082] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.198 [2024-07-25 00:13:59.789325] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.198 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.198 00:13:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:54.198 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.198 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.198 [2024-07-25 00:13:59.798678] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.198 [2024-07-25 00:13:59.799033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.198 [2024-07-25 00:13:59.799059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:54.198 [2024-07-25 00:13:59.799080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:54.198 [2024-07-25 00:13:59.799321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:54.198 [2024-07-25 00:13:59.799559] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.198 [2024-07-25 00:13:59.799582] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.198 [2024-07-25 00:13:59.799595] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.198 [2024-07-25 00:13:59.802783] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.198 [2024-07-25 00:13:59.812204] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.198 [2024-07-25 00:13:59.812635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.198 [2024-07-25 00:13:59.812664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:54.198 [2024-07-25 00:13:59.812680] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:54.198 [2024-07-25 00:13:59.812924] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:54.198 [2024-07-25 00:13:59.813158] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.198 [2024-07-25 00:13:59.813181] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.198 [2024-07-25 00:13:59.813195] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.198 [2024-07-25 00:13:59.816391] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.198 [2024-07-25 00:13:59.825845] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.198 [2024-07-25 00:13:59.826452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.198 [2024-07-25 00:13:59.826493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:54.198 [2024-07-25 00:13:59.826515] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:54.198 [2024-07-25 00:13:59.826765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:54.198 [2024-07-25 00:13:59.826979] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.198 [2024-07-25 00:13:59.827002] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.198 [2024-07-25 00:13:59.827020] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.198 [2024-07-25 00:13:59.830254] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.198 Malloc0 00:33:54.198 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.198 00:13:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:54.198 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.198 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.199 [2024-07-25 00:13:59.839534] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.199 [2024-07-25 00:13:59.839986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.199 [2024-07-25 00:13:59.840015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:54.199 [2024-07-25 00:13:59.840033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:54.199 [2024-07-25 00:13:59.840266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:54.199 [2024-07-25 00:13:59.840499] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.199 [2024-07-25 00:13:59.840523] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.199 [2024-07-25 00:13:59.840537] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:54.199 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.199 00:13:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:54.199 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.199 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.199 [2024-07-25 00:13:59.843842] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.199 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.199 00:13:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.199 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.199 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.199 [2024-07-25 00:13:59.853187] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.199 [2024-07-25 00:13:59.853677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.199 [2024-07-25 00:13:59.853706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8e70 with addr=10.0.0.2, port=4420 00:33:54.199 [2024-07-25 00:13:59.853722] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8e70 is same with the state(5) to be set 00:33:54.199 [2024-07-25 00:13:59.853964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8e70 (9): Bad file descriptor 00:33:54.199 [2024-07-25 00:13:59.854184] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.199 [2024-07-25 00:13:59.854203] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.199 [2024-07-25 00:13:59.854226] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.199 [2024-07-25 00:13:59.854241] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.199 [2024-07-25 00:13:59.857503] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.199 00:13:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.199 00:13:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 940026 00:33:54.199 [2024-07-25 00:13:59.866787] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.457 [2024-07-25 00:13:59.982397] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:04.426 00:34:04.426 Latency(us) 00:34:04.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.426 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:04.426 Verification LBA range: start 0x0 length 0x4000 00:34:04.426 Nvme1n1 : 15.00 6597.33 25.77 9231.02 0.00 8062.71 837.40 23787.14 00:34:04.426 =================================================================================================================== 00:34:04.426 Total : 6597.33 25.77 9231.02 0.00 8062.71 837.40 23787.14 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.426 
00:14:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:04.426 rmmod nvme_tcp 00:34:04.426 rmmod nvme_fabrics 00:34:04.426 rmmod nvme_keyring 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 940692 ']' 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 940692 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 940692 ']' 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 940692 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 940692 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 940692' 00:34:04.426 killing process with pid 940692 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 940692 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 940692 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:04.426 00:14:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.330 00:14:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:06.330 00:34:06.330 real 0m22.339s 00:34:06.330 user 0m59.929s 00:34:06.330 sys 0m4.208s 00:34:06.330 00:14:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:06.330 00:14:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:06.330 ************************************ 00:34:06.330 END TEST nvmf_bdevperf 00:34:06.330 ************************************ 00:34:06.330 00:14:11 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:06.330 00:14:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:06.330 00:14:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:34:06.330 00:14:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:06.330 ************************************ 00:34:06.330 START TEST nvmf_target_disconnect 00:34:06.330 ************************************ 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:06.330 * Looking for test storage... 00:34:06.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.330 00:14:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:06.331 00:14:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:08.279 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:08.279 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:08.279 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:08.279 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:08.279 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:08.279 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:08.279 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:08.279 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:08.279 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:08.279 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:08.279 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:08.279 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:08.279 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:08.279 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:08.279 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:08.279 
00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:08.279 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:08.279 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:08.280 00:14:13 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:08.280 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:08.280 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:08.280 Found net devices under 0000:09:00.0: cvl_0_0 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:08.280 Found net devices under 0000:09:00.1: cvl_0_1 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 
-- # net_devs+=("${pci_net_devs[@]}") 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:08.280 
00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:08.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:08.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:34:08.280 00:34:08.280 --- 10.0.0.2 ping statistics --- 00:34:08.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.280 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:08.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:08.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:34:08.280 00:34:08.280 --- 10.0.0.1 ping statistics --- 00:34:08.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.280 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:08.280 ************************************ 00:34:08.280 START TEST nvmf_target_disconnect_tc1 00:34:08.280 ************************************ 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:34:08.280 
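The trace above shows `nvmf_tcp_init` placing the target-side port into a dedicated network namespace so that initiator and target traffic must cross the physical link between the two E810 ports. A condensed sketch of that topology, using the interface names and addresses from this log (adjust for your own NICs; requires root):

```shell
# Build the split namespace topology from the log above:
# cvl_0_0 (target, 10.0.0.2) moves into its own netns,
# cvl_0_1 (initiator, 10.0.0.1) stays in the default netns.
sudo ip netns add cvl_0_0_ns_spdk
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk

sudo ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port and verify reachability in both directions,
# exactly as the harness does before declaring the fabric usable.
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Because the target process is later launched under `ip netns exec cvl_0_0_ns_spdk`, its listener on 10.0.0.2:4420 is only reachable through the wire, which is what makes the disconnect tests meaningful on physical hardware.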
00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:08.280 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:08.281 00:14:13 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:08.281 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.281 [2024-07-25 00:14:13.933696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.281 [2024-07-25 00:14:13.933777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1655520 with addr=10.0.0.2, port=4420 00:34:08.281 [2024-07-25 00:14:13.933816] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:08.281 [2024-07-25 00:14:13.933843] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:08.281 [2024-07-25 00:14:13.933858] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:08.281 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:08.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:08.281 Initializing NVMe Controllers 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:08.281 00:34:08.281 real 0m0.090s 00:34:08.281 user 0m0.038s 00:34:08.281 sys 0m0.052s 
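tc1 deliberately runs the `reconnect` example against an address with no listening target and treats the resulting `connect() failed, errno = 111` as a pass, via the harness's `NOT`/`valid_exec_arg` helpers in `autotest_common.sh`. A minimal sketch of that invert-the-exit-status pattern (the real helper also inspects `es > 128` to distinguish signal deaths; this sketch only flips success and failure):

```shell
# Negative-test wrapper: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded -> test failure
    fi
    return 0        # command failed, which is what the test expects
}

NOT false && echo "negative test passed"
```

With this shape, `NOT reconnect -r 'trtype:tcp ... trsvcid:4420'` passes precisely because `spdk_nvme_probe()` cannot reach 10.0.0.2:4420 yet.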
00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:08.281 ************************************ 00:34:08.281 END TEST nvmf_target_disconnect_tc1 00:34:08.281 ************************************ 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:08.281 ************************************ 00:34:08.281 START TEST nvmf_target_disconnect_tc2 00:34:08.281 ************************************ 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:08.281 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:08.538 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=943841 00:34:08.538 00:14:13 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:08.538 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 943841 00:34:08.538 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 943841 ']' 00:34:08.538 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.538 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:08.538 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:08.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:08.538 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:08.538 00:14:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:08.539 [2024-07-25 00:14:14.035506] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:34:08.539 [2024-07-25 00:14:14.035594] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:08.539 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.539 [2024-07-25 00:14:14.100407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:08.539 [2024-07-25 00:14:14.188724] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:08.539 [2024-07-25 00:14:14.188777] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:08.539 [2024-07-25 00:14:14.188801] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:08.539 [2024-07-25 00:14:14.188811] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:08.539 [2024-07-25 00:14:14.188821] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:08.539 [2024-07-25 00:14:14.188911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:08.539 [2024-07-25 00:14:14.188976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:08.539 [2024-07-25 00:14:14.189038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:08.539 [2024-07-25 00:14:14.189041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:08.796 Malloc0 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:08.796 [2024-07-25 00:14:14.352202] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:08.796 [2024-07-25 00:14:14.380429] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.796 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=943880 00:34:08.797 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:08.797 00:14:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:08.797 EAL: No free 2048 kB hugepages reported on node 1 00:34:10.697 00:14:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 943841 00:34:10.697 00:14:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:10.697 Read completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Read completed with error 
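Before the kill, tc2 configures the namespaced `nvmf_tgt` over JSON-RPC: a malloc bdev, a TCP transport, a subsystem, a namespace, and two listeners. Sketched below as explicit `rpc.py` invocations (the `scripts/rpc.py` path is an assumption; the RPC method names and arguments are taken from the log):

```shell
# RPC bring-up sequence issued by tc2, addressed to the target
# running inside the cvl_0_0_ns_spdk namespace.
RPC="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py"

$RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_transport -t tcp -o               # TCP transport, as in the log
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

Once the listeners are up, `reconnect` is started against 10.0.0.2:4420 and the target is killed with `SIGKILL`, producing the flood of `Read/Write completed with error (sct=0, sc=8)` completions that follows.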
(sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Read completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Read completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Read completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Read completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Read completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Read completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Read completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Read completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Read completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Read completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Write completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Read completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Read completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Read completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Write completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Read completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Read completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Read completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Write completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.697 Write completed with error (sct=0, sc=8) 00:34:10.697 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 
00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 [2024-07-25 00:14:16.404567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 
starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 [2024-07-25 00:14:16.404890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O 
failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 
00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 [2024-07-25 00:14:16.405207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read 
completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Write completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.698 Read completed with error (sct=0, sc=8) 00:34:10.698 starting I/O failed 00:34:10.699 Write completed with error (sct=0, sc=8) 00:34:10.699 starting I/O failed 00:34:10.699 Read completed with error (sct=0, sc=8) 00:34:10.699 starting I/O failed 00:34:10.699 Write completed with error (sct=0, sc=8) 00:34:10.699 starting I/O failed 00:34:10.699 [2024-07-25 00:14:16.405549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:10.699 [2024-07-25 00:14:16.405780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-25 00:14:16.405811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 
00:34:10.699 [2024-07-25 00:14:16.405980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.406008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.406172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.406200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.406342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.406374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.406537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.406563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.406748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.406774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.407026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.407054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.407227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.407253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.407395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.407436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.407600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.407626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.407794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.407820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.408066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.408108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.408271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.408298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.408434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.408461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.408638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.408673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.408846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.408878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.409066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.409114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.409296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.409337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.409543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.409572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.409708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.409736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.409929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.409955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.410093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.410126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.410268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.410295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.410438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.410464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.410629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.410657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.410911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.410966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.411135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.411162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.411311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.411337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.411505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.411531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.411664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.411690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.411880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.411906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.412040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.412068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.412225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.412254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.412409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-25 00:14:16.412466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-07-25 00:14:16.412639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-07-25 00:14:16.412677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-07-25 00:14:16.412829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-07-25 00:14:16.412856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.975 [2024-07-25 00:14:16.413017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.975 [2024-07-25 00:14:16.413043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.975 qpair failed and we were unable to recover it.
00:34:10.975 [2024-07-25 00:14:16.413178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.975 [2024-07-25 00:14:16.413205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.975 qpair failed and we were unable to recover it.
00:34:10.975 [2024-07-25 00:14:16.413343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.975 [2024-07-25 00:14:16.413369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.975 qpair failed and we were unable to recover it.
00:34:10.975 [2024-07-25 00:14:16.413531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.975 [2024-07-25 00:14:16.413557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.975 qpair failed and we were unable to recover it.
00:34:10.975 [2024-07-25 00:14:16.413680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.975 [2024-07-25 00:14:16.413707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.975 qpair failed and we were unable to recover it.
00:34:10.975 [2024-07-25 00:14:16.413847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.975 [2024-07-25 00:14:16.413874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.975 qpair failed and we were unable to recover it.
00:34:10.975 [2024-07-25 00:14:16.414007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.975 [2024-07-25 00:14:16.414033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.975 qpair failed and we were unable to recover it.
00:34:10.975 [2024-07-25 00:14:16.414175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.975 [2024-07-25 00:14:16.414206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.975 qpair failed and we were unable to recover it.
00:34:10.975 [2024-07-25 00:14:16.414332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.975 [2024-07-25 00:14:16.414359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.975 qpair failed and we were unable to recover it.
00:34:10.975 [2024-07-25 00:14:16.414526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.975 [2024-07-25 00:14:16.414552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.975 qpair failed and we were unable to recover it.
00:34:10.975 [2024-07-25 00:14:16.414712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.975 [2024-07-25 00:14:16.414739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.975 qpair failed and we were unable to recover it.
00:34:10.975 [2024-07-25 00:14:16.414883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.975 [2024-07-25 00:14:16.414909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:10.975 qpair failed and we were unable to recover it.
00:34:10.975 [2024-07-25 00:14:16.415167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.975 [2024-07-25 00:14:16.415208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.975 qpair failed and we were unable to recover it.
00:34:10.975 [2024-07-25 00:14:16.415350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.975 [2024-07-25 00:14:16.415377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.975 qpair failed and we were unable to recover it.
00:34:10.975 [2024-07-25 00:14:16.415546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.975 [2024-07-25 00:14:16.415589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.415789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.415814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.415964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.415990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.416151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.416177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.416315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.416342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.416487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.416515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.416735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.416760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.416901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.416927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.417118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.417148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.417291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.417318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.417541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.417602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.417816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.417854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.418021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.418048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.418187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.418213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.418368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.418402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.418584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.418611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.418769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.418795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.418978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.419004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.419175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.419202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.419362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.419404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.419642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.419683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.419822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.419848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.420079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.420127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.420318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.420344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.420504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.420533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.420754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.420778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.420999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.421027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.421226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.421253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.421413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.421440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.421614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.421639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.421794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.421820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.421973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.421999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.422164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.422190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.422330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.422360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.422526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.422552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.422737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.422763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-07-25 00:14:16.422974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-07-25 00:14:16.423003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.423249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.423276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.423457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.423483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.423649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.423674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.423845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.423871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.424051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.424081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.424282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.424322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.424508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.424536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.424754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.424780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.424937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.424963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.425131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.425157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.425324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.425350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.425510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.425536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.425675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.425702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.425901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.425926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.426063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.426106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.426274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.426300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.426491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.426517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.426651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.426677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.426813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-07-25 00:14:16.426840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-07-25 00:14:16.426981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.977 [2024-07-25 00:14:16.427007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.977 qpair failed and we were unable to recover it. 00:34:10.977 [2024-07-25 00:14:16.427146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.977 [2024-07-25 00:14:16.427174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.977 qpair failed and we were unable to recover it. 00:34:10.977 [2024-07-25 00:14:16.427314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.977 [2024-07-25 00:14:16.427341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.977 qpair failed and we were unable to recover it. 00:34:10.977 [2024-07-25 00:14:16.427521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.977 [2024-07-25 00:14:16.427546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.977 qpair failed and we were unable to recover it. 00:34:10.977 [2024-07-25 00:14:16.427717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.977 [2024-07-25 00:14:16.427742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.977 qpair failed and we were unable to recover it. 
00:34:10.977 [2024-07-25 00:14:16.427878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.977 [2024-07-25 00:14:16.427905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.977 qpair failed and we were unable to recover it. 00:34:10.977 [2024-07-25 00:14:16.428067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.977 [2024-07-25 00:14:16.428093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.977 qpair failed and we were unable to recover it. 00:34:10.977 [2024-07-25 00:14:16.428322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.977 [2024-07-25 00:14:16.428348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.977 qpair failed and we were unable to recover it. 00:34:10.977 [2024-07-25 00:14:16.428487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.977 [2024-07-25 00:14:16.428512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.977 qpair failed and we were unable to recover it. 00:34:10.977 [2024-07-25 00:14:16.428696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.977 [2024-07-25 00:14:16.428723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.977 qpair failed and we were unable to recover it. 
00:34:10.977 [2024-07-25 00:14:16.428880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.977 [2024-07-25 00:14:16.428923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.977 qpair failed and we were unable to recover it. 00:34:10.977 [2024-07-25 00:14:16.429080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.977 [2024-07-25 00:14:16.429110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.977 qpair failed and we were unable to recover it. 00:34:10.977 [2024-07-25 00:14:16.429276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.977 [2024-07-25 00:14:16.429301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.977 qpair failed and we were unable to recover it. 00:34:10.977 [2024-07-25 00:14:16.429486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.977 [2024-07-25 00:14:16.429512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.977 qpair failed and we were unable to recover it. 00:34:10.977 [2024-07-25 00:14:16.429677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.977 [2024-07-25 00:14:16.429703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.977 qpair failed and we were unable to recover it. 
00:34:10.977 [2024-07-25 00:14:16.429860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.977 [2024-07-25 00:14:16.429887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.977 qpair failed and we were unable to recover it. 00:34:10.977 [2024-07-25 00:14:16.430086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.977 [2024-07-25 00:14:16.430133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.977 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.430306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.430339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.430499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.430525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.430681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.430706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 
00:34:10.978 [2024-07-25 00:14:16.430866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.430893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.431026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.431051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.431195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.431222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.431380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.431406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.431568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.431594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 
00:34:10.978 [2024-07-25 00:14:16.431784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.431812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.431963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.431989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.432151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.432178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.432333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.432359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.432498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.432524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 
00:34:10.978 [2024-07-25 00:14:16.432707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.432733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.432923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.432949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.433085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.433119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.433288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.433314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.433464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.433490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 
00:34:10.978 [2024-07-25 00:14:16.433647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.433672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.433832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.433860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.434098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.434131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.434282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.434307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.434460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.434486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 
00:34:10.978 [2024-07-25 00:14:16.434650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.434676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.434857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.434882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.435021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.435047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.435206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.435250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.435437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.435463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 
00:34:10.978 [2024-07-25 00:14:16.435626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.435652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.435793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.435818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.435975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.436000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.436157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.436184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.436341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.436367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 
00:34:10.978 [2024-07-25 00:14:16.436486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.436512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.436695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.436721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.978 [2024-07-25 00:14:16.436881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.978 [2024-07-25 00:14:16.436907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.978 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.437062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.437088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.437250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.437276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 
00:34:10.979 [2024-07-25 00:14:16.437613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.437666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.437808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.437834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.437969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.438000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.438161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.438187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.438323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.438349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 
00:34:10.979 [2024-07-25 00:14:16.438508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.438533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.438706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.438733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.438876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.438904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.439088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.439123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.439287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.439313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 
00:34:10.979 [2024-07-25 00:14:16.439472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.439497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.439734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.439759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.439913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.439940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.440074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.440099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.440310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.440336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 
00:34:10.979 [2024-07-25 00:14:16.440493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.440519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.440649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.440675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.440834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.440859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.441017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.441044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.441282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.441309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 
00:34:10.979 [2024-07-25 00:14:16.441520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.441548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.441771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.441796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.441988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.442013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.442174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.442201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.442445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.442474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 
00:34:10.979 [2024-07-25 00:14:16.442636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.442661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.442823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.442848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.443084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.443116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.443274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.443300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.443442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.443469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 
00:34:10.979 [2024-07-25 00:14:16.443625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.443651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.443812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.443838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.443972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.443998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-07-25 00:14:16.444180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-07-25 00:14:16.444221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.444370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.444399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 
00:34:10.980 [2024-07-25 00:14:16.444585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.444612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.444769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.444796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.444961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.444987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.445141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.445168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.445339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.445368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 
00:34:10.980 [2024-07-25 00:14:16.445546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.445572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.445733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.445761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.445950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.446000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.446211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.446237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.446376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.446402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 
00:34:10.980 [2024-07-25 00:14:16.446565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.446592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.446752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.446779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.446938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.446968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.447147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.447178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.447331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.447358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 
00:34:10.980 [2024-07-25 00:14:16.447539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.447565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.447722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.447748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.447933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.447960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.448125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.448152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.448345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.448371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 
00:34:10.980 [2024-07-25 00:14:16.448532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.448558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.448729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.448770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.448960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.448985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.449204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.449230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.449448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.449490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 
00:34:10.980 [2024-07-25 00:14:16.449668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.449693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.449817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.449842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-07-25 00:14:16.450029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-07-25 00:14:16.450054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.450218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.450245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.450430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.450457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 
00:34:10.981 [2024-07-25 00:14:16.450660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.450689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.450874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.450901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.451058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.451084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.451252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.451280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.451475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.451528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 
00:34:10.981 [2024-07-25 00:14:16.451727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.451756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.451905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.451934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.452141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.452168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.452411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.452436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.452644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.452673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 
00:34:10.981 [2024-07-25 00:14:16.452857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.452897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.453064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.453089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.453277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.453306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.453512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.453538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.453717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.453742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 
00:34:10.981 [2024-07-25 00:14:16.453905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.453932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.454091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.454127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.454284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.454315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.454485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.454515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.454812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.454871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 
00:34:10.981 [2024-07-25 00:14:16.455092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.455126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.455294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.455320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.455442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.455467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.455629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.455655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.455860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.455887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 
00:34:10.981 [2024-07-25 00:14:16.456092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.456127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.456268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.456293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.456549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.456574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.456903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.456966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.457161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.457203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 
00:34:10.981 [2024-07-25 00:14:16.457401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.457443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.457691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.457716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.457888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-07-25 00:14:16.457914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-07-25 00:14:16.458069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-07-25 00:14:16.458099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-07-25 00:14:16.458311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-07-25 00:14:16.458340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 
00:34:10.982 [2024-07-25 00:14:16.458527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-07-25 00:14:16.458553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-07-25 00:14:16.458744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-07-25 00:14:16.458773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-07-25 00:14:16.458984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-07-25 00:14:16.459025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-07-25 00:14:16.459166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-07-25 00:14:16.459192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-07-25 00:14:16.459376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-07-25 00:14:16.459402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 
00:34:10.982 [2024-07-25 00:14:16.459528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-07-25 00:14:16.459554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-07-25 00:14:16.459680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-07-25 00:14:16.459705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-07-25 00:14:16.459959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-07-25 00:14:16.459984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-07-25 00:14:16.460167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-07-25 00:14:16.460196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-07-25 00:14:16.460406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-07-25 00:14:16.460432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 
00:34:10.982 [2024-07-25 00:14:16.460563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-07-25 00:14:16.460588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-07-25 00:14:16.460827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-07-25 00:14:16.460852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-07-25 00:14:16.461030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-07-25 00:14:16.461055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-07-25 00:14:16.461186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-07-25 00:14:16.461213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-07-25 00:14:16.461341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-07-25 00:14:16.461368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 
00:34:10.982 [2024-07-25 00:14:16.461586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.461612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.461791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.461820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.462019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.462048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.462214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.462239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.462368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.462393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.462579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.462608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.462765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.462792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.462931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.462961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.463123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.463149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.463284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.463310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.463459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.463485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.463635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.463660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.463827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.463853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.464018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.464044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.464209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.464262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.464438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.464466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.464641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.464668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.464805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.464831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.465015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.982 [2024-07-25 00:14:16.465042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.982 qpair failed and we were unable to recover it.
00:34:10.982 [2024-07-25 00:14:16.465202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.465230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.465425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.465451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.465594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.465620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.465754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.465780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.465937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.465963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.466097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.466129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.466335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.466361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.466575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.466616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.466780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.466806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.466961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.466986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.467205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.467232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.467395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.467421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.467624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.467653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.467855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.467882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.468023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.468051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.468243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.468283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.468448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.468479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.468660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.468686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.468846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.468872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.469025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.469051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.469198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.469227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.469417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.469443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.469624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.469650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.469809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.469834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.469991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.470017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.470230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.470270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.470436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.470464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.470644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.470670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.470815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.470841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.470977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.471004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.471146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.471173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.471336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.471361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.471602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.471628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.471815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.471840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.471970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.471997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.472150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.472176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.983 qpair failed and we were unable to recover it.
00:34:10.983 [2024-07-25 00:14:16.472339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.983 [2024-07-25 00:14:16.472366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.472557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.472583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.472719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.472746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.472906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.472932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.473117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.473146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.473297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.473325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.473566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.473633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.473919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.473947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.474129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.474156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.474364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.474394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.474582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.474613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.474817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.474844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.475027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.475057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.475275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.475301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.475443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.475470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.475610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.475636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.475791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.475817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.475951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.475977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.476126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.476153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.476335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.476368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.476586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.476612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.476800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.476826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.476990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.477018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.477225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.477252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.477434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.477463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.477670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.477696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.477875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.477901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.478070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.478099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.478365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.478391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.478551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.478577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.478716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.478742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.478878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.478905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.479025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.479050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.479219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.479247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.479406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.479435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.479649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.479675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [2024-07-25 00:14:16.479921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-07-25 00:14:16.479950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.480129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.480172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.480307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.480332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.480467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.480494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.480769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.480820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.480994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.481021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.481179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.481207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.481371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.481397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.481534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.481561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.481723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.481748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.481931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.481960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.482145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.482173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.482303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.482330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.482479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.482505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.482688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.482714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.482929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.482956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.483133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.483177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.483341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.483367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.483564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.483592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.483763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.483791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.483966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.483991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.484180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.484221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.484347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.484373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.484554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.985 [2024-07-25 00:14:16.484584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.985 qpair failed and we were unable to recover it.
00:34:10.985 [2024-07-25 00:14:16.484789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.985 [2024-07-25 00:14:16.484817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.985 qpair failed and we were unable to recover it. 00:34:10.985 [2024-07-25 00:14:16.484990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.985 [2024-07-25 00:14:16.485019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.985 qpair failed and we were unable to recover it. 00:34:10.985 [2024-07-25 00:14:16.485215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.985 [2024-07-25 00:14:16.485242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.985 qpair failed and we were unable to recover it. 00:34:10.985 [2024-07-25 00:14:16.485404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.985 [2024-07-25 00:14:16.485430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.985 qpair failed and we were unable to recover it. 00:34:10.985 [2024-07-25 00:14:16.485558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.985 [2024-07-25 00:14:16.485583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.985 qpair failed and we were unable to recover it. 
00:34:10.985 [2024-07-25 00:14:16.485742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.985 [2024-07-25 00:14:16.485767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.985 qpair failed and we were unable to recover it. 00:34:10.985 [2024-07-25 00:14:16.485923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.985 [2024-07-25 00:14:16.485965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.985 qpair failed and we were unable to recover it. 00:34:10.985 [2024-07-25 00:14:16.486140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.985 [2024-07-25 00:14:16.486169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.985 qpair failed and we were unable to recover it. 00:34:10.985 [2024-07-25 00:14:16.486412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.486437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.486645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.486673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 
00:34:10.986 [2024-07-25 00:14:16.486839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.486867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.487067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.487092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.487261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.487289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.487442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.487467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.487623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.487649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 
00:34:10.986 [2024-07-25 00:14:16.487830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.487855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.488016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.488042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.488227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.488253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.488449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.488477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.488652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.488677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 
00:34:10.986 [2024-07-25 00:14:16.488805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.488830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.488989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.489015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.489150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.489176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.489358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.489383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.489561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.489590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 
00:34:10.986 [2024-07-25 00:14:16.489788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.489816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.490061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.490087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.490335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.490363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.490549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.490575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.490755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.490781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 
00:34:10.986 [2024-07-25 00:14:16.490916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.490941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.491100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.491133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.491318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.491343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.491493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.491518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.491679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.491705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 
00:34:10.986 [2024-07-25 00:14:16.491862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.491887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.492092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.492135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.492325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.492351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.492512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.492537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.492751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.492784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 
00:34:10.986 [2024-07-25 00:14:16.492940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.492968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.493150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.493176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.493333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.493359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.493515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.986 [2024-07-25 00:14:16.493556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.986 qpair failed and we were unable to recover it. 00:34:10.986 [2024-07-25 00:14:16.493743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.493769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 
00:34:10.987 [2024-07-25 00:14:16.493952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.493978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.494125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.494151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.494309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.494336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.494536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.494565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.494727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.494755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 
00:34:10.987 [2024-07-25 00:14:16.494943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.494970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.495148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.495177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.495353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.495381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.495534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.495560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.495756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.495784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 
00:34:10.987 [2024-07-25 00:14:16.495970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.495996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.496166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.496204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.496368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.496394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.496551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.496576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.496729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.496754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 
00:34:10.987 [2024-07-25 00:14:16.496964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.496992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.497199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.497225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.497467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.497493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.497650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.497675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.497837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.497862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 
00:34:10.987 [2024-07-25 00:14:16.497994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.498021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.498211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.498241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.498451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.498477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.498632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.498658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.498821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.498846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 
00:34:10.987 [2024-07-25 00:14:16.498983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.499010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.499163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.499190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.499350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.499375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.499611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.499638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 00:34:10.987 [2024-07-25 00:14:16.499830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.987 [2024-07-25 00:14:16.499856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.987 qpair failed and we were unable to recover it. 
00:34:10.987 [2024-07-25 00:14:16.500037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.987 [2024-07-25 00:14:16.500067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.987 qpair failed and we were unable to recover it.
00:34:10.987 [2024-07-25 00:14:16.500323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.987 [2024-07-25 00:14:16.500350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.987 qpair failed and we were unable to recover it.
00:34:10.987 [2024-07-25 00:14:16.500537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.987 [2024-07-25 00:14:16.500562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.987 qpair failed and we were unable to recover it.
00:34:10.987 [2024-07-25 00:14:16.500768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.987 [2024-07-25 00:14:16.500797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.987 qpair failed and we were unable to recover it.
00:34:10.987 [2024-07-25 00:14:16.500976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.987 [2024-07-25 00:14:16.501007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.987 qpair failed and we were unable to recover it.
00:34:10.987 [2024-07-25 00:14:16.501148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.987 [2024-07-25 00:14:16.501175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.501314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.501339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.501523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.501548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.501682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.501708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.501871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.501897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.502068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.502096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.502278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.502304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.502488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.502514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.502751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.502777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.502972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.502998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.503198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.503227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.503428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.503453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.503636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.503662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.503840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.503869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.504043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.504070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.504229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.504255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.504460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.504488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.504668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.504694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.504881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.504907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.505077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.505115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.505308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.505333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.505464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.505489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.505653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.505678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.505808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.505833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.506016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.506042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.506234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.506264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.506481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.506507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.506677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.506703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.506885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.506910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.507084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.507119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.507291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.507316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.507449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.507492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.507670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.507700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.507874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.507901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.508083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.508127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.508306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.508335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.988 [2024-07-25 00:14:16.508507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.988 [2024-07-25 00:14:16.508533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.988 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.508693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.508719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.508871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.508914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.509096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.509133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.509274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.509301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.509463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.509488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.509645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.509670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.509807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.509832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.510088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.510126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.510323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.510349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.510532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.510557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.510756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.510782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.510946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.510972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.511180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.511207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.511361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.511386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.511578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.511604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.511741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.511767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.511953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.511979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.512162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.512189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.512315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.512341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.512502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.512528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.512681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.512706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.512860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.512885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.513015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.513059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.513248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.513274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.513454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.513482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.513679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.513705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.513867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.513892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.514056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.514082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.514256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.514282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.514472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.514498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.514628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.514654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.514805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.514831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.515009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.515034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.515186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.515213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.989 qpair failed and we were unable to recover it.
00:34:10.989 [2024-07-25 00:14:16.515337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.989 [2024-07-25 00:14:16.515363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.515490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.515517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.515674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.515718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.515864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.515893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.516165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.516191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.516366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.516409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.516610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.516638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.516819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.516845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.516999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.517036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.517245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.517271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.517435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.517461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.517604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.517629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.517789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.517815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.517977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.518003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.518142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.518168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.518352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.518377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.518501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.518526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.518667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.518693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.518851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.518877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.519034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.519061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.519216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.519242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.519376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.519404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.519668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.519694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.519881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.519907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.520059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.520085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.520245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.520271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.520453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.520481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.520629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.520659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.520863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.520889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.521037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.521065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.521246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.521273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.521453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.521479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.521607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.521634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.990 qpair failed and we were unable to recover it.
00:34:10.990 [2024-07-25 00:14:16.521788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.990 [2024-07-25 00:14:16.521828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.991 qpair failed and we were unable to recover it.
00:34:10.991 [2024-07-25 00:14:16.522006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.991 [2024-07-25 00:14:16.522033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.991 qpair failed and we were unable to recover it.
00:34:10.991 [2024-07-25 00:14:16.522202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.991 [2024-07-25 00:14:16.522229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.991 qpair failed and we were unable to recover it.
00:34:10.991 [2024-07-25 00:14:16.522413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.991 [2024-07-25 00:14:16.522439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.991 qpair failed and we were unable to recover it.
00:34:10.991 [2024-07-25 00:14:16.522620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.991 [2024-07-25 00:14:16.522646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.991 qpair failed and we were unable to recover it.
00:34:10.991 [2024-07-25 00:14:16.522807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.991 [2024-07-25 00:14:16.522833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.991 qpair failed and we were unable to recover it.
00:34:10.991 [2024-07-25 00:14:16.522986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.523012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.523200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.523226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.523410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.523435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.523597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.523623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.523857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.523883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 
00:34:10.991 [2024-07-25 00:14:16.524085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.524129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.524309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.524339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.524523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.524548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.524689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.524714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.524924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.524954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 
00:34:10.991 [2024-07-25 00:14:16.525194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.525221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.525428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.525457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.525604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.525632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.525835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.525860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.526019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.526046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 
00:34:10.991 [2024-07-25 00:14:16.526225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.526255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.526458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.526484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.526620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.526646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.526797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.526823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.526951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.526977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 
00:34:10.991 [2024-07-25 00:14:16.527154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.527183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.527357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.527385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.527558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.527584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.527744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.527770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.527919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.527944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 
00:34:10.991 [2024-07-25 00:14:16.528080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.528122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.528298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.528326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.528505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.528533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.528709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.528735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-07-25 00:14:16.528894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.528919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 
00:34:10.991 [2024-07-25 00:14:16.529090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-07-25 00:14:16.529126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.529326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.529351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.529529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.529557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.529740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.529766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.529923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.529967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 
00:34:10.992 [2024-07-25 00:14:16.530159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.530186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.530350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.530376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.530555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.530580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.530760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.530789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.530960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.530986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 
00:34:10.992 [2024-07-25 00:14:16.531128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.531154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.531283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.531326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.531528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.531555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.531736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.531762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.531920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.531946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 
00:34:10.992 [2024-07-25 00:14:16.532085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.532142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.532306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.532332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.532517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.532542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.532731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.532756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.532939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.532969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 
00:34:10.992 [2024-07-25 00:14:16.533116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.533143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.533303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.533329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.533499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.533525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.533657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.533682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.533845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.533888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 
00:34:10.992 [2024-07-25 00:14:16.534067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.534093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.534266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.534294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.534451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.534477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.534638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.534664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.534792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.534819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 
00:34:10.992 [2024-07-25 00:14:16.535037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.535063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.535253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.535280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.535411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.535436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.535579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.535604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.535761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.535786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 
00:34:10.992 [2024-07-25 00:14:16.535946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.535972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-07-25 00:14:16.536110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-07-25 00:14:16.536138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.993 [2024-07-25 00:14:16.536323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-07-25 00:14:16.536349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-07-25 00:14:16.536564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-07-25 00:14:16.536590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-07-25 00:14:16.536728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-07-25 00:14:16.536754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 
00:34:10.993 [2024-07-25 00:14:16.536879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-07-25 00:14:16.536905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-07-25 00:14:16.537058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-07-25 00:14:16.537084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-07-25 00:14:16.537330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-07-25 00:14:16.537356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-07-25 00:14:16.537511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-07-25 00:14:16.537536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-07-25 00:14:16.537698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-07-25 00:14:16.537724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 
00:34:10.993 [2024-07-25 00:14:16.537856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.993 [2024-07-25 00:14:16.537883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:10.993 qpair failed and we were unable to recover it.
00:34:10.993 [... identical connect() failed / qpair failed messages repeated for each retry through 2024-07-25 00:14:16.560419 ...]
00:34:10.996 [2024-07-25 00:14:16.560576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.996 [2024-07-25 00:14:16.560602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.996 qpair failed and we were unable to recover it. 00:34:10.996 [2024-07-25 00:14:16.560756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.996 [2024-07-25 00:14:16.560781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.996 qpair failed and we were unable to recover it. 00:34:10.996 [2024-07-25 00:14:16.560941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.996 [2024-07-25 00:14:16.560970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.996 qpair failed and we were unable to recover it. 00:34:10.996 [2024-07-25 00:14:16.561151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.996 [2024-07-25 00:14:16.561178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.996 qpair failed and we were unable to recover it. 00:34:10.996 [2024-07-25 00:14:16.561314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.996 [2024-07-25 00:14:16.561340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.996 qpair failed and we were unable to recover it. 
00:34:10.996 [2024-07-25 00:14:16.561472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.996 [2024-07-25 00:14:16.561499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.996 qpair failed and we were unable to recover it. 00:34:10.996 [2024-07-25 00:14:16.561624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.996 [2024-07-25 00:14:16.561654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.996 qpair failed and we were unable to recover it. 00:34:10.996 [2024-07-25 00:14:16.561806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.996 [2024-07-25 00:14:16.561832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.996 qpair failed and we were unable to recover it. 00:34:10.996 [2024-07-25 00:14:16.561997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.996 [2024-07-25 00:14:16.562023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.996 qpair failed and we were unable to recover it. 00:34:10.996 [2024-07-25 00:14:16.562186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.996 [2024-07-25 00:14:16.562212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.996 qpair failed and we were unable to recover it. 
00:34:10.996 [2024-07-25 00:14:16.562368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.996 [2024-07-25 00:14:16.562393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.996 qpair failed and we were unable to recover it. 00:34:10.996 [2024-07-25 00:14:16.562536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.996 [2024-07-25 00:14:16.562566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.996 qpair failed and we were unable to recover it. 00:34:10.996 [2024-07-25 00:14:16.562748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.996 [2024-07-25 00:14:16.562774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.996 qpair failed and we were unable to recover it. 00:34:10.996 [2024-07-25 00:14:16.562921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.996 [2024-07-25 00:14:16.562964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.996 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.563155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.563181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 
00:34:10.997 [2024-07-25 00:14:16.563342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.563367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.563519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.563549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.563738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.563764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.563922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.563948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.564140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.564167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 
00:34:10.997 [2024-07-25 00:14:16.564333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.564377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.564559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.564586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.564738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.564763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.564892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.564917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.565075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.565110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 
00:34:10.997 [2024-07-25 00:14:16.565311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.565340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.565524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.565550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.565731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.565756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.565925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.565953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.566140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.566166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 
00:34:10.997 [2024-07-25 00:14:16.566308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.566334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.566473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.566500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.566677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.566705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.566870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.566896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.567044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.567070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 
00:34:10.997 [2024-07-25 00:14:16.567235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.567262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.567447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.567473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.567633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.567658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.567841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.567869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.568114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.568140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 
00:34:10.997 [2024-07-25 00:14:16.568328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.568356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.568552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.568580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.568779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.568804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.568967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.568992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.569203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.569229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 
00:34:10.997 [2024-07-25 00:14:16.569390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.569415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.569619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.569653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.569846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.569872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.570053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.570078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 00:34:10.997 [2024-07-25 00:14:16.570268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.997 [2024-07-25 00:14:16.570294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.997 qpair failed and we were unable to recover it. 
00:34:10.997 [2024-07-25 00:14:16.570473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.570502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.570688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.570713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.570919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.570947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.571127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.571153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.571289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.571316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 
00:34:10.998 [2024-07-25 00:14:16.571517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.571546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.571726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.571752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.571937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.571964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.572127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.572154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.572281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.572307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 
00:34:10.998 [2024-07-25 00:14:16.572470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.572495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.572625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.572669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.572832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.572860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.573046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.573071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.573242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.573269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 
00:34:10.998 [2024-07-25 00:14:16.573446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.573476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.573656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.573682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.573863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.573888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.574055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.574084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.574294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.574320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 
00:34:10.998 [2024-07-25 00:14:16.574502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.574528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.574686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.574711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.574892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.574918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.575180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.575207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.575388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.575417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 
00:34:10.998 [2024-07-25 00:14:16.575589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.575615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.575819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.575848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.576055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.576080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.576249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.576276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.576445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.576471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 
00:34:10.998 [2024-07-25 00:14:16.576603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.576629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.576809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.576834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.576995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.577020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.577159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.577185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.577351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.577376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 
00:34:10.998 [2024-07-25 00:14:16.577534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.998 [2024-07-25 00:14:16.577559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.998 qpair failed and we were unable to recover it. 00:34:10.998 [2024-07-25 00:14:16.577721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.577750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.577932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.577960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.578156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.578185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.578361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.578387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 
00:34:10.999 [2024-07-25 00:14:16.578519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.578544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.578666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.578691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.578851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.578892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.579064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.579089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.579233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.579259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 
00:34:10.999 [2024-07-25 00:14:16.579424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.579450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.579620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.579645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.579807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.579833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.579991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.580018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.580208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.580235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 
00:34:10.999 [2024-07-25 00:14:16.580390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.580416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.580579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.580604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.580766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.580792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.580954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.580981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.581140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.581167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 
00:34:10.999 [2024-07-25 00:14:16.581327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.581353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.581486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.581511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.581672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.581698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.581853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.581878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.582037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.582063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 
00:34:10.999 [2024-07-25 00:14:16.582277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.582306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.582448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.582479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.582691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.582717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.582883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.582910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.583036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.583063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 
00:34:10.999 [2024-07-25 00:14:16.583256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.583283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.583446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.583472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-07-25 00:14:16.583660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-07-25 00:14:16.583685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.583881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.583908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.584090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.584123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 
00:34:11.000 [2024-07-25 00:14:16.584283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.584327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.584497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.584522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.584684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.584710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.584890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.584915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.585072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.585097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 
00:34:11.000 [2024-07-25 00:14:16.585238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.585263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.585420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.585450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.585617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.585659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.585832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.585860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.586012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.586038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 
00:34:11.000 [2024-07-25 00:14:16.586240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.586269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.586422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.586448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.586575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.586601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.586753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.586778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.586940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.586965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 
00:34:11.000 [2024-07-25 00:14:16.587120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.587164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.587342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.587370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.587529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.587555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.587716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.587757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.587909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.587936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 
00:34:11.000 [2024-07-25 00:14:16.588118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.588163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.588309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.588334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.588515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.588540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.588819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.588880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.589150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.589176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 
00:34:11.000 [2024-07-25 00:14:16.589337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.589362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.589528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.589554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.589733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.589759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.589882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.589908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.590066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.590091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 
00:34:11.000 [2024-07-25 00:14:16.590277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.590306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.590481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.590509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.590695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.590721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.000 [2024-07-25 00:14:16.590912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.000 [2024-07-25 00:14:16.590938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.000 qpair failed and we were unable to recover it. 00:34:11.001 [2024-07-25 00:14:16.591141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.001 [2024-07-25 00:14:16.591171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.001 qpair failed and we were unable to recover it. 
00:34:11.001 [2024-07-25 00:14:16.591341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.001 [2024-07-25 00:14:16.591369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.001 qpair failed and we were unable to recover it. 00:34:11.001 [2024-07-25 00:14:16.591558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.001 [2024-07-25 00:14:16.591583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.001 qpair failed and we were unable to recover it. 00:34:11.001 [2024-07-25 00:14:16.591741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.001 [2024-07-25 00:14:16.591768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.001 qpair failed and we were unable to recover it. 00:34:11.001 [2024-07-25 00:14:16.591929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.001 [2024-07-25 00:14:16.591956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.001 qpair failed and we were unable to recover it. 00:34:11.001 [2024-07-25 00:14:16.592077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.001 [2024-07-25 00:14:16.592120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.001 qpair failed and we were unable to recover it. 
00:34:11.001 [2024-07-25 00:14:16.592306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.001 [2024-07-25 00:14:16.592332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.001 qpair failed and we were unable to recover it. 00:34:11.001 [2024-07-25 00:14:16.592491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.001 [2024-07-25 00:14:16.592517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.001 qpair failed and we were unable to recover it. 00:34:11.001 [2024-07-25 00:14:16.592701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.001 [2024-07-25 00:14:16.592726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.001 qpair failed and we were unable to recover it. 00:34:11.001 [2024-07-25 00:14:16.592898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.001 [2024-07-25 00:14:16.592927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.001 qpair failed and we were unable to recover it. 00:34:11.001 [2024-07-25 00:14:16.593098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.001 [2024-07-25 00:14:16.593130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.001 qpair failed and we were unable to recover it. 
00:34:11.001 [2024-07-25 00:14:16.593381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.001 [2024-07-25 00:14:16.593412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.001 qpair failed and we were unable to recover it. 00:34:11.001 [2024-07-25 00:14:16.593587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.001 [2024-07-25 00:14:16.593620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.001 qpair failed and we were unable to recover it. 00:34:11.001 [2024-07-25 00:14:16.593875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.001 [2024-07-25 00:14:16.593926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.001 qpair failed and we were unable to recover it. 00:34:11.001 [2024-07-25 00:14:16.594128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.001 [2024-07-25 00:14:16.594157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.001 qpair failed and we were unable to recover it. 00:34:11.001 [2024-07-25 00:14:16.594339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.001 [2024-07-25 00:14:16.594366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.001 qpair failed and we were unable to recover it. 
00:34:11.001 [2024-07-25 00:14:16.594563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.594593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.594808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.594869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.595062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.595100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.595269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.595296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.595459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.595485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.595657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.595688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.595865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.595895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.596131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.596158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.596322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.596348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.596549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.596576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.596783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.596811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.597047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.597075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.597248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.597275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.597439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.597465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.597628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.597654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.597807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.597833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.598034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.598062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.598289] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd20f0 is same with the state(5) to be set
00:34:11.001 [2024-07-25 00:14:16.598556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.598601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.598796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.001 [2024-07-25 00:14:16.598824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.001 qpair failed and we were unable to recover it.
00:34:11.001 [2024-07-25 00:14:16.598988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.599014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.599169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.599197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.599359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.599396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.599561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.599586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.599777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.599803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.599952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.599977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.600119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.600171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.600371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.600406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.600614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.600640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.600802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.600828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.601027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.601055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.601257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.601283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.601457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.601485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.601747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.601798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.601979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.602005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.602204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.602231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.602392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.602418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.602570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.602600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.602802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.602831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.603013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.603039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.603205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.603232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.603366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.603419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.603621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.603647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.603808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.603834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.603967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.603993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.604169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.604196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.604329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.604355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.604517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.604561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.604762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.604790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.604996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.605022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.605162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.605189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.605378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.605410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.605571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.605598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.605740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.605766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.605971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.606000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.606163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.606190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.606340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.002 [2024-07-25 00:14:16.606393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.002 qpair failed and we were unable to recover it.
00:34:11.002 [2024-07-25 00:14:16.606531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.606561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.606744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.606770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.606928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.606971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.607187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.607228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.607395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.607423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.607586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.607612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.607805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.607831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.607967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.607999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.608142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.608168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.608323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.608349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.608542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.608567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.608701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.608727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.608889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.608917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.609136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.609163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.609296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.609322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.609564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.609615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.609798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.609824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.610011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.610036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.610186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.610213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.610372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.610409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.610547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.610572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.610779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.610808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.610986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.611012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.611172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.611199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.611363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.611415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.611624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.611649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.611775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.611800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.611962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.612004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.612196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.612223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.612383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.612429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.612635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.612664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.612813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.612838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.612998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.613024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.613209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.613236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.613393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.613418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.613608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.613650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.613886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.003 [2024-07-25 00:14:16.613940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.003 qpair failed and we were unable to recover it.
00:34:11.003 [2024-07-25 00:14:16.614122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.614149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.614305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.614330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.614545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.614573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.614743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.614769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.614904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.614930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.615089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.615126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.615310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.615336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.615494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.615520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.615671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.615697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.615881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.615907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.616114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.616157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.616360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.616410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.616606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.616634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.616793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.616822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.616992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.617020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.617227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.617253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.617400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.617425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.617586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.617615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.617772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.617799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.617936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.617977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.618159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.618190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.618366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.004 [2024-07-25 00:14:16.618392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.004 qpair failed and we were unable to recover it.
00:34:11.004 [2024-07-25 00:14:16.618526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.004 [2024-07-25 00:14:16.618551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.004 qpair failed and we were unable to recover it. 00:34:11.004 [2024-07-25 00:14:16.618777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.004 [2024-07-25 00:14:16.618835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.004 qpair failed and we were unable to recover it. 00:34:11.004 [2024-07-25 00:14:16.619019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.004 [2024-07-25 00:14:16.619050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.004 qpair failed and we were unable to recover it. 00:34:11.004 [2024-07-25 00:14:16.619213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.004 [2024-07-25 00:14:16.619239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.004 qpair failed and we were unable to recover it. 00:34:11.004 [2024-07-25 00:14:16.619429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.004 [2024-07-25 00:14:16.619454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.004 qpair failed and we were unable to recover it. 
00:34:11.004 [2024-07-25 00:14:16.619637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.004 [2024-07-25 00:14:16.619663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.004 qpair failed and we were unable to recover it. 00:34:11.004 [2024-07-25 00:14:16.619857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.004 [2024-07-25 00:14:16.619886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.004 qpair failed and we were unable to recover it. 00:34:11.004 [2024-07-25 00:14:16.620090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.004 [2024-07-25 00:14:16.620127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.004 qpair failed and we were unable to recover it. 00:34:11.004 [2024-07-25 00:14:16.620286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.004 [2024-07-25 00:14:16.620313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.004 qpair failed and we were unable to recover it. 00:34:11.004 [2024-07-25 00:14:16.620472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.004 [2024-07-25 00:14:16.620498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.004 qpair failed and we were unable to recover it. 
00:34:11.004 [2024-07-25 00:14:16.620676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.620759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.620941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.620966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.621139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.621185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.621353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.621380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.621540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.621566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 
00:34:11.005 [2024-07-25 00:14:16.621725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.621751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.621930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.621960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.622165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.622192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.622331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.622358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.622493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.622520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 
00:34:11.005 [2024-07-25 00:14:16.622706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.622732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.622868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.622895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.623048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.623075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.623244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.623270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.623429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.623457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 
00:34:11.005 [2024-07-25 00:14:16.623634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.623662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.623810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.623835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.624016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.624041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.624281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.624307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.624477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.624504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 
00:34:11.005 [2024-07-25 00:14:16.624711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.624740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.624913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.624941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.625092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.625125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.625306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.625332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.625519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.625547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 
00:34:11.005 [2024-07-25 00:14:16.625757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.625783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.625945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.625972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.626152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.626196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.626357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.626383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.626619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.626645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 
00:34:11.005 [2024-07-25 00:14:16.626956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.627011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.627223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.627249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.627398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.627434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.627641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.627669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.627831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.627857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 
00:34:11.005 [2024-07-25 00:14:16.627995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.005 [2024-07-25 00:14:16.628022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.005 qpair failed and we were unable to recover it. 00:34:11.005 [2024-07-25 00:14:16.628234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.628260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.628392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.628419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.628621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.628649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.628855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.628883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 
00:34:11.006 [2024-07-25 00:14:16.629090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.629121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.629248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.629274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.629438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.629464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.629624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.629650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.629832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.629860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 
00:34:11.006 [2024-07-25 00:14:16.630048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.630073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.630220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.630247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.630414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.630440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.630615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.630645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.630845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.630871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 
00:34:11.006 [2024-07-25 00:14:16.631040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.631070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.631296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.631323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.631503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.631528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.631703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.631731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.631879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.631909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 
00:34:11.006 [2024-07-25 00:14:16.632083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.632119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.632306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.632335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.632499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.632528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.632722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.632747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.632904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.632933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 
00:34:11.006 [2024-07-25 00:14:16.633115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.633141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.633301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.633327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.633473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.633503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.633648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.633678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.633885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.633911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 
00:34:11.006 [2024-07-25 00:14:16.634069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.634099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.634302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.634328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.634503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.634530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.634690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.634717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.634852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.634877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 
00:34:11.006 [2024-07-25 00:14:16.635035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.635060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.635197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.006 [2024-07-25 00:14:16.635224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.006 qpair failed and we were unable to recover it. 00:34:11.006 [2024-07-25 00:14:16.635389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.635415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.635605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.635631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.635813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.635842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 
00:34:11.007 [2024-07-25 00:14:16.636027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.636055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.636243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.636269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.636439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.636468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.636663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.636691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.636863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.636888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 
00:34:11.007 [2024-07-25 00:14:16.637066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.637094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.637283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.637312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.637496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.637522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.637688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.637714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.637853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.637879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 
00:34:11.007 [2024-07-25 00:14:16.638034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.638061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.638264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.638294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.638462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.638490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.638699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.638724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.638919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.638945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 
00:34:11.007 [2024-07-25 00:14:16.639110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.639138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.639322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.639348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.639484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.639509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.639692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.639734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.639936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.639962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 
00:34:11.007 [2024-07-25 00:14:16.640119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.640163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.640348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.640374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.640527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.640553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.640719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.640745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.640904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.640936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 
00:34:11.007 [2024-07-25 00:14:16.641251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.641278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.641455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.641481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.641638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.641663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.641795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.641822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.642023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.642052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 
00:34:11.007 [2024-07-25 00:14:16.642230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.642256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.642416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.642442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.007 [2024-07-25 00:14:16.642605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.007 [2024-07-25 00:14:16.642630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.007 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.642804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.642833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.643000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.643025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 
00:34:11.008 [2024-07-25 00:14:16.643202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.643231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.643401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.643430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.643604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.643630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.643847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.643876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.644059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.644088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 
00:34:11.008 [2024-07-25 00:14:16.644255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.644282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.644464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.644490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.644673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.644701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.644884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.644909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.645070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.645096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 
00:34:11.008 [2024-07-25 00:14:16.645239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.645265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.645418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.645444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.645646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.645675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.645847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.645875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.646124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.646151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 
00:34:11.008 [2024-07-25 00:14:16.646332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.646360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.646507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.646537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.646725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.646751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.646915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.646941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.647070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.647097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 
00:34:11.008 [2024-07-25 00:14:16.647298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.647323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.647498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.647527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.647706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.647734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.647891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.647932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.648133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.648176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 
00:34:11.008 [2024-07-25 00:14:16.648366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.648409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.648567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.648593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.648730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.648755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.648954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.648982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.649156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.649187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 
00:34:11.008 [2024-07-25 00:14:16.649331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.649357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.649493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.649520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.649682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.649707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.008 [2024-07-25 00:14:16.649867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.008 [2024-07-25 00:14:16.649910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.008 qpair failed and we were unable to recover it. 00:34:11.009 [2024-07-25 00:14:16.650089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.650128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 
00:34:11.009 [2024-07-25 00:14:16.650336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.650361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 00:34:11.009 [2024-07-25 00:14:16.650550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.650579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 00:34:11.009 [2024-07-25 00:14:16.650721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.650750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 00:34:11.009 [2024-07-25 00:14:16.650922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.650948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 00:34:11.009 [2024-07-25 00:14:16.651087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.651125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 
00:34:11.009 [2024-07-25 00:14:16.651334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.651363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 00:34:11.009 [2024-07-25 00:14:16.651539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.651565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 00:34:11.009 [2024-07-25 00:14:16.651807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.651849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 00:34:11.009 [2024-07-25 00:14:16.652037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.652065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 00:34:11.009 [2024-07-25 00:14:16.652256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.652282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 
00:34:11.009 [2024-07-25 00:14:16.652466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.652495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 00:34:11.009 [2024-07-25 00:14:16.652672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.652700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 00:34:11.009 [2024-07-25 00:14:16.652905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.652931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 00:34:11.009 [2024-07-25 00:14:16.653064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.653091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 00:34:11.009 [2024-07-25 00:14:16.653301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.653331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 
00:34:11.009 [2024-07-25 00:14:16.653573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.653598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 00:34:11.009 [2024-07-25 00:14:16.653805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.653833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 00:34:11.009 [2024-07-25 00:14:16.654029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.654055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 00:34:11.009 [2024-07-25 00:14:16.654184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.654212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 00:34:11.009 [2024-07-25 00:14:16.654416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.009 [2024-07-25 00:14:16.654446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.009 qpair failed and we were unable to recover it. 
00:34:11.280 [2024-07-25 00:14:16.674588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.280 [2024-07-25 00:14:16.674627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.280 qpair failed and we were unable to recover it.
00:34:11.280 [2024-07-25 00:14:16.674804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.280 [2024-07-25 00:14:16.674842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.280 qpair failed and we were unable to recover it.
00:34:11.281 [2024-07-25 00:14:16.675042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.281 [2024-07-25 00:14:16.675082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.281 qpair failed and we were unable to recover it.
00:34:11.281 [2024-07-25 00:14:16.675236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.281 [2024-07-25 00:14:16.675264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.281 qpair failed and we were unable to recover it.
00:34:11.281 [2024-07-25 00:14:16.675442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.281 [2024-07-25 00:14:16.675470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.281 qpair failed and we were unable to recover it.
00:34:11.281 [2024-07-25 00:14:16.675630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.675672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.675940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.675984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.676152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.676180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.676334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.676378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.676548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.676597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 
00:34:11.281 [2024-07-25 00:14:16.676835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.676886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.677024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.677050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.677218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.677263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.677440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.677495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.677687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.677735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 
00:34:11.281 [2024-07-25 00:14:16.677900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.677926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.678087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.678125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.678285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.678327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.678514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.678557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.678749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.678778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 
00:34:11.281 [2024-07-25 00:14:16.678983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.679010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.679173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.679218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.679372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.679412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.679616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.679660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.679836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.679863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 
00:34:11.281 [2024-07-25 00:14:16.679997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.680025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.680242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.680288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.680503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.680549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.680774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.680831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.680967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.680993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 
00:34:11.281 [2024-07-25 00:14:16.681131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.681158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.681319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.681363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.681574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.681601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.681761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.681787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.681921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.681947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 
00:34:11.281 [2024-07-25 00:14:16.682107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.682134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.682281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.682307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.682478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.682506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.682665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.682691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.682829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.682860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 
00:34:11.281 [2024-07-25 00:14:16.683009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.683050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.683252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.683283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.683436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.683464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.683645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.683687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.683837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.683866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 
00:34:11.281 [2024-07-25 00:14:16.684032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.684058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.684232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.684259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.684385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.684420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.684607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.684643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.684859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.684920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 
00:34:11.281 [2024-07-25 00:14:16.685082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.685131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.685281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.685307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.685474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.685500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.685730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.685779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.685955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.685984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 
00:34:11.281 [2024-07-25 00:14:16.686146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.686173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.686312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.686338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.686540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.686606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.686887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.686946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.687111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.687159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 
00:34:11.281 [2024-07-25 00:14:16.687301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.687327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.687473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.687516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.687758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.687812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.687986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.688014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.688182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.688208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 
00:34:11.281 [2024-07-25 00:14:16.688389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.688423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.688629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.688660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.688842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.688870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.689032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.689059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.689201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.689227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 
00:34:11.281 [2024-07-25 00:14:16.689364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.689390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.689586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.689632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.689791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.689833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.690070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.690117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.690302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.690329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 
00:34:11.281 [2024-07-25 00:14:16.690464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.690506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.690711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.690756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.690929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.690958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.691118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.691146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.691310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.691337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 
00:34:11.281 [2024-07-25 00:14:16.691529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.691576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.691821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.691853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.692048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.692075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.281 [2024-07-25 00:14:16.692249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-07-25 00:14:16.692275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it. 00:34:11.282 [2024-07-25 00:14:16.692430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.282 [2024-07-25 00:14:16.692458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.282 qpair failed and we were unable to recover it. 
00:34:11.282 [2024-07-25 00:14:16.692623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.282 [2024-07-25 00:14:16.692652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.282 qpair failed and we were unable to recover it. 00:34:11.282 [2024-07-25 00:14:16.692824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.282 [2024-07-25 00:14:16.692869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.282 qpair failed and we were unable to recover it. 00:34:11.282 [2024-07-25 00:14:16.693031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.282 [2024-07-25 00:14:16.693059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.282 qpair failed and we were unable to recover it. 00:34:11.282 [2024-07-25 00:14:16.693262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.282 [2024-07-25 00:14:16.693303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.282 qpair failed and we were unable to recover it. 00:34:11.282 [2024-07-25 00:14:16.693465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.282 [2024-07-25 00:14:16.693504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.282 qpair failed and we were unable to recover it. 
00:34:11.282 [2024-07-25 00:14:16.693719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.282 [2024-07-25 00:14:16.693769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.282 qpair failed and we were unable to recover it. 00:34:11.282 [2024-07-25 00:14:16.693947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.282 [2024-07-25 00:14:16.693977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.282 qpair failed and we were unable to recover it. 00:34:11.282 [2024-07-25 00:14:16.694140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.282 [2024-07-25 00:14:16.694167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.282 qpair failed and we were unable to recover it. 00:34:11.282 [2024-07-25 00:14:16.694302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.282 [2024-07-25 00:14:16.694328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.282 qpair failed and we were unable to recover it. 00:34:11.282 [2024-07-25 00:14:16.694577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.282 [2024-07-25 00:14:16.694624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.282 qpair failed and we were unable to recover it. 
00:34:11.282 [2024-07-25 00:14:16.694862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.282 [2024-07-25 00:14:16.694908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.282 qpair failed and we were unable to recover it. 00:34:11.282 [2024-07-25 00:14:16.695060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.282 [2024-07-25 00:14:16.695088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.282 qpair failed and we were unable to recover it. 00:34:11.282 [2024-07-25 00:14:16.695259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.282 [2024-07-25 00:14:16.695287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.282 qpair failed and we were unable to recover it. 00:34:11.282 [2024-07-25 00:14:16.695418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.282 [2024-07-25 00:14:16.695445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.282 qpair failed and we were unable to recover it. 00:34:11.282 [2024-07-25 00:14:16.695596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.282 [2024-07-25 00:14:16.695636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.282 qpair failed and we were unable to recover it. 
00:34:11.283 [2024-07-25 00:14:16.718315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.718344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.718534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.718561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.718721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.718749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.718910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.718936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.719078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.719120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 
00:34:11.283 [2024-07-25 00:14:16.719301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.719329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.719497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.719543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.719775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.719802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.719954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.719979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.720192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.720222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 
00:34:11.283 [2024-07-25 00:14:16.720419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.720446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.720630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.720658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.720837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.720863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.720997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.721022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.721234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.721264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 
00:34:11.283 [2024-07-25 00:14:16.721436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.721464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.721703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.721752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.721919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.721944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.722085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.722129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.722281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.722309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 
00:34:11.283 [2024-07-25 00:14:16.722514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.722543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.722714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.722743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.722900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.722930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.723097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.723130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.723338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.723367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 
00:34:11.283 [2024-07-25 00:14:16.723585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.723632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.723815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.723841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.724023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.724049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.724245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.724275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.724514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.724554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 
00:34:11.283 [2024-07-25 00:14:16.724765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.724803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.724980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.725006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.725185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.283 [2024-07-25 00:14:16.725213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.283 qpair failed and we were unable to recover it. 00:34:11.283 [2024-07-25 00:14:16.725416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.725445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.725668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.725696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 
00:34:11.284 [2024-07-25 00:14:16.725841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.725866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.726023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.726049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.726243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.726271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.726452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.726483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.726671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.726718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 
00:34:11.284 [2024-07-25 00:14:16.726872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.726897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.727040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.727065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.727253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.727282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.727449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.727477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.727689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.727717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 
00:34:11.284 [2024-07-25 00:14:16.727866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.727891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.728028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.728053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.728199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.728225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.728353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.728379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.728512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.728537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 
00:34:11.284 [2024-07-25 00:14:16.728668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.728693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.728830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.728855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.729023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.729048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.729196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.729223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.729355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.729381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 
00:34:11.284 [2024-07-25 00:14:16.729518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.729543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.729704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.729729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.729864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.729906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.730096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.730154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.730312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.730337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 
00:34:11.284 [2024-07-25 00:14:16.730469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.730506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.730656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.730685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.730876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.730904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.731067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.731097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.731264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.731293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 
00:34:11.284 [2024-07-25 00:14:16.731466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.731495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.731616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.731641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.731796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.731840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.731990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.732019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.732215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.732242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 
00:34:11.284 [2024-07-25 00:14:16.732395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.732435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.732585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.732613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.732765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.732790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.732966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.733012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-07-25 00:14:16.733189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.733217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 
00:34:11.284 [2024-07-25 00:14:16.733415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-07-25 00:14:16.733440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 
[identical connect() failed / qpair failed messages for tqpair=0xfc4570 repeated through 00:14:16.740683]
00:34:11.285 [2024-07-25 00:14:16.741084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-07-25 00:14:16.741138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 
[identical connect() failed / qpair failed messages for tqpair=0x7fda80000b90 repeated through 00:14:16.755276]
00:34:11.286 [2024-07-25 00:14:16.755472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.755496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.755669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.755697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.755878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.755904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.756067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.756122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.756302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.756327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-07-25 00:14:16.756451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.756476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.756656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.756681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.756808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.756856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.757010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.757035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.757198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.757224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-07-25 00:14:16.757388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.757413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.757598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.757623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.757813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.757862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.758004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.758032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.758210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.758235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-07-25 00:14:16.758424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.758452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.758622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.758655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.758833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.758858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.759030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.759058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.759214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.759239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-07-25 00:14:16.759370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.759395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.759528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.759553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.759690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.759715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.759872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.759900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.760079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.760121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-07-25 00:14:16.760272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.760297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.760436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.760460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.760639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.760667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.760820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.760849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.761011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.761047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-07-25 00:14:16.761219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.761244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.761381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.761423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.761605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.761631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.761812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.761839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.761997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.762025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-07-25 00:14:16.762203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.762229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.762414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.762441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.762618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.762645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.762831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.762857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.762994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.763019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-07-25 00:14:16.763184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.763210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.763391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.763416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.763551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.763575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.763703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.763728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.763862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.763887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-07-25 00:14:16.764062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.764095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.764287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.764312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.764447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.764474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.764618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.764643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.764801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.764844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-07-25 00:14:16.765048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.765076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.765238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.765263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.765409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.765437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.765598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.765623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.765757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.765782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-07-25 00:14:16.765939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.765976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.766122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.766148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.766308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.766332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.766488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.766512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.766672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.766697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-07-25 00:14:16.766873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.766915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.767097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.767142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.767272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.767296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.767425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.767454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.767586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.767611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-07-25 00:14:16.767810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.767835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.767989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.768017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.768201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.768226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.768359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.768384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.768549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.768581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-07-25 00:14:16.768779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.768807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.768961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.768986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.769148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.769191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.769335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.769362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-07-25 00:14:16.769522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.769547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-07-25 00:14:16.769721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-07-25 00:14:16.769746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence for tqpair=0x7fda80000b90 (addr=10.0.0.2, port=4420) repeats continuously through 2024-07-25 00:14:16.792 ...]
00:34:11.288 [2024-07-25 00:14:16.792273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.792297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.792450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.792475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.792678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.792705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.792869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.792896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.793071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.793095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 
00:34:11.288 [2024-07-25 00:14:16.793277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.793304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.793479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.793507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.793686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.793711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.793849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.793874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.794006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.794031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 
00:34:11.288 [2024-07-25 00:14:16.794171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.794196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.794345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.794369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.794556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.794581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.794764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.794789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.794913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.794938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 
00:34:11.288 [2024-07-25 00:14:16.795057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.795082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.795294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.795319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.795465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.795493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.795637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.795664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.795837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.795862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 
00:34:11.288 [2024-07-25 00:14:16.796041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.796069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.796248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.796273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.796442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.796467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.796598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.796623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.796774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.796815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 
00:34:11.288 [2024-07-25 00:14:16.796989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.797014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.797142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.797187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.797366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.797394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.797570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.797595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.797717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.797759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 
00:34:11.288 [2024-07-25 00:14:16.797958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.797986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.798160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.798186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.798368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.798412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.798597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.798625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.798807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.798832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 
00:34:11.288 [2024-07-25 00:14:16.799028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.799056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.799267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.799292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.799452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.799477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.799737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.799765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.799948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.799972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 
00:34:11.288 [2024-07-25 00:14:16.800105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.288 [2024-07-25 00:14:16.800131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.288 qpair failed and we were unable to recover it. 00:34:11.288 [2024-07-25 00:14:16.800267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.800294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.800485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.800511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.800701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.800725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.800881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.800909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 
00:34:11.289 [2024-07-25 00:14:16.801063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.801091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.801272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.801297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.801459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.801484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.801618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.801659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.801831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.801857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 
00:34:11.289 [2024-07-25 00:14:16.802031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.802059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.802248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.802273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.802439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.802464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.802650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.802679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.802827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.802855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 
00:34:11.289 [2024-07-25 00:14:16.803013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.803038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.803201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.803227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.803363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.803388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.803522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.803546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.803719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.803749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 
00:34:11.289 [2024-07-25 00:14:16.803953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.803981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.804172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.804197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.804328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.804354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.804508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.804536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.804751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.804776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 
00:34:11.289 [2024-07-25 00:14:16.804956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.804985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.805159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.805187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.805364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.805389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.805567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.805596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.805797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.805824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 
00:34:11.289 [2024-07-25 00:14:16.806029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.806054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.806214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.806240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.806368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.806409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.806592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.806621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.806750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.806775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 
00:34:11.289 [2024-07-25 00:14:16.806931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.806972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.807119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.807144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.807301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.807326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.807511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.807540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 00:34:11.289 [2024-07-25 00:14:16.807726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.289 [2024-07-25 00:14:16.807751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.289 qpair failed and we were unable to recover it. 
00:34:11.289 [2024-07-25 00:14:16.807889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.289 [2024-07-25 00:14:16.807913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.289 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error / qpair failed record for tqpair=0x7fda80000b90 (addr=10.0.0.2, port=4420) repeats continuously from 00:14:16.808072 through 00:14:16.829137 ...]
00:34:11.290 [2024-07-25 00:14:16.829319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.290 [2024-07-25 00:14:16.829346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.290 qpair failed and we were unable to recover it.
00:34:11.290 [2024-07-25 00:14:16.829504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.290 [2024-07-25 00:14:16.829529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.290 qpair failed and we were unable to recover it. 00:34:11.290 [2024-07-25 00:14:16.829689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.290 [2024-07-25 00:14:16.829713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.290 qpair failed and we were unable to recover it. 00:34:11.290 [2024-07-25 00:14:16.829914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.290 [2024-07-25 00:14:16.829941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.290 qpair failed and we were unable to recover it. 00:34:11.290 [2024-07-25 00:14:16.830088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.290 [2024-07-25 00:14:16.830125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.290 qpair failed and we were unable to recover it. 00:34:11.290 [2024-07-25 00:14:16.830288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.290 [2024-07-25 00:14:16.830314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.290 qpair failed and we were unable to recover it. 
00:34:11.290 [2024-07-25 00:14:16.830454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.290 [2024-07-25 00:14:16.830479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.290 qpair failed and we were unable to recover it. 00:34:11.290 [2024-07-25 00:14:16.830639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.290 [2024-07-25 00:14:16.830664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.290 qpair failed and we were unable to recover it. 00:34:11.290 [2024-07-25 00:14:16.830795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.290 [2024-07-25 00:14:16.830820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.290 qpair failed and we were unable to recover it. 00:34:11.290 [2024-07-25 00:14:16.830953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.290 [2024-07-25 00:14:16.830978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.290 qpair failed and we were unable to recover it. 00:34:11.290 [2024-07-25 00:14:16.831174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.290 [2024-07-25 00:14:16.831200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.290 qpair failed and we were unable to recover it. 
00:34:11.290 [2024-07-25 00:14:16.831332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.290 [2024-07-25 00:14:16.831359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.290 qpair failed and we were unable to recover it. 00:34:11.290 [2024-07-25 00:14:16.831565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.290 [2024-07-25 00:14:16.831591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.290 qpair failed and we were unable to recover it. 00:34:11.290 [2024-07-25 00:14:16.831773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.831798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.831995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.832022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.832200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.832228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 
00:34:11.291 [2024-07-25 00:14:16.832410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.832435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.832635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.832663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.832871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.832896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.833055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.833081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.833293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.833321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 
00:34:11.291 [2024-07-25 00:14:16.833492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.833519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.833677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.833704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.833866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.833891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.834023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.834048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.834171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.834199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 
00:34:11.291 [2024-07-25 00:14:16.834357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.834382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.834597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.834624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.834812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.834837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.835019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.835044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.835229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.835255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 
00:34:11.291 [2024-07-25 00:14:16.835416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.835442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.835574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.835601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.835795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.835823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.835974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.835999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.836203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.836231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 
00:34:11.291 [2024-07-25 00:14:16.836402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.836429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.836632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.836657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.836793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.836818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.837012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.837040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.837196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.837222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 
00:34:11.291 [2024-07-25 00:14:16.837349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.837374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.837586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.837615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.837795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.837820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.837973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.838005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.838190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.838216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 
00:34:11.291 [2024-07-25 00:14:16.838400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.838425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.838606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.838634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.838800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.838828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.838977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.839003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.839163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.839205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 
00:34:11.291 [2024-07-25 00:14:16.839353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.839381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.839527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.839552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.839706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.839747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.839932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.839957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.840133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.840159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 
00:34:11.291 [2024-07-25 00:14:16.840360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.840388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.840558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.840586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.840775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.840801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.841004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.841031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.841189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.841214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 
00:34:11.291 [2024-07-25 00:14:16.841343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.841369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.841501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.841525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.841780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.841807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.841988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.842012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.842197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.842222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 
00:34:11.291 [2024-07-25 00:14:16.842419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.842444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.842581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.842605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.842807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.842834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.843011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.843039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.843240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.843266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 
00:34:11.291 [2024-07-25 00:14:16.843429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.843455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.843654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.843682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.843835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.843860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.844060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.844087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 00:34:11.291 [2024-07-25 00:14:16.844274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.844299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 
00:34:11.291 [2024-07-25 00:14:16.844454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.291 [2024-07-25 00:14:16.844479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.291 qpair failed and we were unable to recover it. 
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." messages repeated from 00:14:16.844454 through 00:14:16.867708; repeats elided ...]
00:34:11.293 [2024-07-25 00:14:16.867846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.867875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.868030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.868069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.868246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.868272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.868432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.868457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.868655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.868683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 
00:34:11.293 [2024-07-25 00:14:16.868856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.868882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.869057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.869085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.869276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.869301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.869461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.869486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.869660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.869687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 
00:34:11.293 [2024-07-25 00:14:16.869826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.869853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.870000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.870024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.870225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.870254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.870464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.870489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.870627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.870652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 
00:34:11.293 [2024-07-25 00:14:16.870814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.870839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.871079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.871111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.871296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.871322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.871564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.871592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.871772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.871797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 
00:34:11.293 [2024-07-25 00:14:16.871982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.872007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.872186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.872214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.872391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.872419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.872575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.872600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.872726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.872752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 
00:34:11.293 [2024-07-25 00:14:16.872964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.872992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.873231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.873258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.873469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.873496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.873667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.873695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.873850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.873875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 
00:34:11.293 [2024-07-25 00:14:16.874044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.874072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.874233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.874259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.874386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.874413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.874583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.874612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.874818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.874843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 
00:34:11.293 [2024-07-25 00:14:16.875056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.875083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.875296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.875321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.875508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.875533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.875711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.875736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.875932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.875960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 
00:34:11.293 [2024-07-25 00:14:16.876164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.876194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.876353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.876378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.876513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.876538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.876741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.876768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.876943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.876968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 
00:34:11.293 [2024-07-25 00:14:16.877145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.877173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.877344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.877371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.877562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.877587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.877730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.877772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.877953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.877979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 
00:34:11.293 [2024-07-25 00:14:16.878111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.878137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.878290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.878315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.878465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.878508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.878688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.878713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-07-25 00:14:16.878954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-07-25 00:14:16.878995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-07-25 00:14:16.879173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.879202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.879382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.879407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.879621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.879649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.879817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.879844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.880027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.880052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-07-25 00:14:16.880200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.880245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.880454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.880482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.880662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.880687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.880843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.880870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.881047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.881076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-07-25 00:14:16.881324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.881349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.881559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.881587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.881789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.881817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.881997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.882024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.882265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.882291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-07-25 00:14:16.882466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.882490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.882646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.882672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.882816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.882845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.883041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.883069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.883256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.883282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-07-25 00:14:16.883460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.883488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.883685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.883712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.883896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.883920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.884077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.884108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.884276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.884301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-07-25 00:14:16.884463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.884492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.884669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.884696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.884891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.884918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.885122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.885148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.885333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.885362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-07-25 00:14:16.885562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.885587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.885744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.885769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.885946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.885975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.886172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.886201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.886377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.886402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-07-25 00:14:16.886580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.886608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.886793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.886818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.886976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.887001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.887208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.887237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.887415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.887445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-07-25 00:14:16.887599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.887623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.887784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.887809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.887990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.888015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.888178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.888203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.888458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.888486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-07-25 00:14:16.888664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.888692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.888872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.888897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.889116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.889159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.889302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.889327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.889464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.889488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-07-25 00:14:16.889672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.889696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.889901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.889928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.890120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.890145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.890276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.890301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.890449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.890476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-07-25 00:14:16.890623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.890648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.890820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.890848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.891015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.891042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.891243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.891268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.891437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.891464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-07-25 00:14:16.891649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.891676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.891859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.891884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.892074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.892099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.892261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.892302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.892458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.892484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-07-25 00:14:16.892617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.892647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.892785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.892826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.892997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.893023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.893230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.893259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.893454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.893482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-07-25 00:14:16.893725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.893750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.893954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.893981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.894183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.894211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.894391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.894418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.894624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.894652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-07-25 00:14:16.894892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.894917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.895097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.895155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.895344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-07-25 00:14:16.895369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-07-25 00:14:16.895526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.895555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.895712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.895738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 
00:34:11.295 [2024-07-25 00:14:16.895897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.895922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.896053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.896094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.896281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.896307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.896513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.896540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.896713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.896738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 
00:34:11.295 [2024-07-25 00:14:16.896872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.896897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.897031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.897056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.897217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.897242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.897409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.897434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.897583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.897612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 
00:34:11.295 [2024-07-25 00:14:16.897761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.897789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.897959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.897984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.898157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.898183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.898357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.898386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.898628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.898654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 
00:34:11.295 [2024-07-25 00:14:16.898810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.898835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.898992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.899017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.899233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.899258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.899407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.899435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.899607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.899634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 
00:34:11.295 [2024-07-25 00:14:16.899816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.899841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.899977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.900003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.900163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.900188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.900326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.900352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.900510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.900535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 
00:34:11.295 [2024-07-25 00:14:16.900693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.900722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.900880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.900905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.901083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.901118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.901372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.901414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.901575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.901600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 
00:34:11.295 [2024-07-25 00:14:16.901757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.901783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.901962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.901990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.902161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.902186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.902425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.902453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.902656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.902681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 
00:34:11.295 [2024-07-25 00:14:16.902815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.902840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.903087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.903120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.903308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.903333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.903493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.903517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-07-25 00:14:16.903649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-07-25 00:14:16.903693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 
00:34:11.296 [2024-07-25 00:14:16.911911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.296 [2024-07-25 00:14:16.911936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.296 qpair failed and we were unable to recover it.
00:34:11.296 [2024-07-25 00:14:16.912065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.296 [2024-07-25 00:14:16.912090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.296 qpair failed and we were unable to recover it.
00:34:11.296 [2024-07-25 00:14:16.912248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.296 [2024-07-25 00:14:16.912286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.296 qpair failed and we were unable to recover it.
00:34:11.296 [2024-07-25 00:14:16.912478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.296 [2024-07-25 00:14:16.912505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.296 qpair failed and we were unable to recover it.
00:34:11.296 [2024-07-25 00:14:16.912660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.296 [2024-07-25 00:14:16.912689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.296 qpair failed and we were unable to recover it.
00:34:11.296 [2024-07-25 00:14:16.925328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.296 [2024-07-25 00:14:16.925355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.296 qpair failed and we were unable to recover it. 00:34:11.296 [2024-07-25 00:14:16.925490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.296 [2024-07-25 00:14:16.925515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.296 qpair failed and we were unable to recover it. 00:34:11.296 [2024-07-25 00:14:16.925694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.296 [2024-07-25 00:14:16.925721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.296 qpair failed and we were unable to recover it. 00:34:11.296 [2024-07-25 00:14:16.925894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.296 [2024-07-25 00:14:16.925922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.296 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.926072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.926097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 
00:34:11.297 [2024-07-25 00:14:16.926232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.926258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.926392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.926418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.926573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.926598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.926745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.926773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.926918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.926946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 
00:34:11.297 [2024-07-25 00:14:16.927153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.927179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.927315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.927339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.927511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.927539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.927715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.927740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.927901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.927942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 
00:34:11.297 [2024-07-25 00:14:16.928114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.928142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.928308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.928333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.928512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.928540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.928702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.928727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.928884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.928909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 
00:34:11.297 [2024-07-25 00:14:16.929053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.929081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.929283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.929322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.929531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.929558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.929729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.929757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.929934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.929962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 
00:34:11.297 [2024-07-25 00:14:16.930129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.930157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.930323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.930349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.930528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.930556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.930722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.930747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.930883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.930909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 
00:34:11.297 [2024-07-25 00:14:16.931109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.931166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.931363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.931389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.931532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.931558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.931697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.931738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.931894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.931919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 
00:34:11.297 [2024-07-25 00:14:16.932049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.932074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.932299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.932325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.932477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.932503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.932655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.932683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.932856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.932886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 
00:34:11.297 [2024-07-25 00:14:16.933070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.933095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.933249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.933276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.933436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.933479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.933636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.933661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.933798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.933839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 
00:34:11.297 [2024-07-25 00:14:16.934003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.934028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.934176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.934202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.934365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.934406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.934555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.934584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.934758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.934782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 
00:34:11.297 [2024-07-25 00:14:16.934944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.934969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.935132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.935171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.935421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.935449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.935655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.935681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.935859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.935887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 
00:34:11.297 [2024-07-25 00:14:16.936062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.936091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.936251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.936277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.936447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.936474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.936659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.936685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.936848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.936873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 
00:34:11.297 [2024-07-25 00:14:16.937025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.937053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.937209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.937236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.937365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.937392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.937605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.937633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.937795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.937821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 
00:34:11.297 [2024-07-25 00:14:16.937962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.937987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.938156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.938200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.938364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.938389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.938544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.938572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.938759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.938784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 
00:34:11.297 [2024-07-25 00:14:16.938936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.938961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.939197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.939223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.939377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.939417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.939572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.939597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.939762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.939788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 
00:34:11.297 [2024-07-25 00:14:16.939974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.940000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.940137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.940163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.940317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.940346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.940482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.940506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.940637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.940662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 
00:34:11.297 [2024-07-25 00:14:16.940796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.940822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.940989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.941017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.941190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.941216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.941370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.941395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 00:34:11.297 [2024-07-25 00:14:16.941550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.297 [2024-07-25 00:14:16.941575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.297 qpair failed and we were unable to recover it. 
00:34:11.297 [2024-07-25 00:14:16.941747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.941775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.941980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.942005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.942183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.942210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.942343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.942369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.942527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.942552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.942688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.942713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.942880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.942905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.943041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.943066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.943236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.943261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.943395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.943420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.943576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.943600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.943753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.943782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.944042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.944069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.944279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.944305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.944465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.944493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.944733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.944758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.944938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.944963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.945095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.945125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.945287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.945311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.945483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.945522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.945708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.945752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.945920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.945946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.946077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.946111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.946272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.946297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.946458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.946483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.946668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.946711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.946857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.946901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.947037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.947062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.947224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.947267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.947457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.947485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.947687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.947730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.947893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.947937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.948068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.948098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.948253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.948296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.948448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.948491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.948677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.948721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.948859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.948885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.949018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.949044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.949226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.949271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.949446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.949474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.949655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.949699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.949859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.949884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.950041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.950067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.950215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.950259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.950426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.950469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.950650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.950693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.950834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.950859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.951041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.951066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.951226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.951271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.951449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.951494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.951677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.951720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.951848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.951873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.952012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.952037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.952193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.952238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.952391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.952433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.952625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.952668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.952806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.952831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.952980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.953005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.953179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.953223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.953378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.953422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.953606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.953650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.953806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.953832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.953963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.953989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.954169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.954213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.954373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.954415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.954589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.954632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.954757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.954782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.954915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.954941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.955111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.955137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.955264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.955290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.955473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.955499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.955628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.955653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.955789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.955816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.955984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.298 [2024-07-25 00:14:16.956009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.298 qpair failed and we were unable to recover it.
00:34:11.298 [2024-07-25 00:14:16.956182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.956227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.956359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.956385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.956545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.956587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.956768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.956793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.956951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.956977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.957119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.957146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.957333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.957376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.957527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.957569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.957703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.957730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.957918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.957943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.958079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.958111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.958271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.958314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.958469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.958497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.958694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.958737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.958899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.958924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.959060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.959085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.959237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.959281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.959456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.959500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.959651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.959693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.959857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.959882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.960027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.299 [2024-07-25 00:14:16.960066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.299 qpair failed and we were unable to recover it.
00:34:11.299 [2024-07-25 00:14:16.960253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.960284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.960431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.960459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.960629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.960657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.960906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.960934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.961135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.961182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 
00:34:11.299 [2024-07-25 00:14:16.961313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.961339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.961545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.961573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.961719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.961747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.961952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.961980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.962189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.962215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 
00:34:11.299 [2024-07-25 00:14:16.962393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.962420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.962595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.962623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.962771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.962798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.962998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.963026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.963177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.963203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 
00:34:11.299 [2024-07-25 00:14:16.963363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.963406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.963634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.963663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.963836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.963863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.964043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.964071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.964251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.964276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 
00:34:11.299 [2024-07-25 00:14:16.964442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.964469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.964621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.964661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.964835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.964863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.965015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.965041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.965203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.965229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 
00:34:11.299 [2024-07-25 00:14:16.965404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.965432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.965671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.965698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.965873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.965900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.966098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.966134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.966308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.966333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 
00:34:11.299 [2024-07-25 00:14:16.966534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.966561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.966715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.966743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.966915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.966943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.967118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.967146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.967292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.967317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 
00:34:11.299 [2024-07-25 00:14:16.967520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.967547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.967718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.967746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.967942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.967969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.968151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.968176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.968315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.968340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 
00:34:11.299 [2024-07-25 00:14:16.968555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.968580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.968795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.968823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.969021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.969048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.969207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.969233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.969392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.969423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 
00:34:11.299 [2024-07-25 00:14:16.969576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.969603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.969753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.969795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.969968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.969996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.970157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.970182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.970361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.970389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 
00:34:11.299 [2024-07-25 00:14:16.970561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.970588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.970762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.970790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.970958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.970987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.971211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.971251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.971417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.971461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 
00:34:11.299 [2024-07-25 00:14:16.971609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.971653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.971835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.971877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.972031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.972057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.972209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.972235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 00:34:11.299 [2024-07-25 00:14:16.972423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.299 [2024-07-25 00:14:16.972469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.299 qpair failed and we were unable to recover it. 
00:34:11.299 [2024-07-25 00:14:16.972651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.972677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.972831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.972874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.973006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.973033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.973194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.973219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.973348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.973390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 
00:34:11.300 [2024-07-25 00:14:16.973556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.973585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.973784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.973812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.973957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.973985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.974185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.974211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.974385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.974413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 
00:34:11.300 [2024-07-25 00:14:16.974580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.974608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.974785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.974815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.974998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.975023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.975153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.975179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.975311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.975337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 
00:34:11.300 [2024-07-25 00:14:16.975513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.975541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.975711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.975738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.975910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.975938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.976114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.976156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.976293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.976318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 
00:34:11.300 [2024-07-25 00:14:16.976473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.976497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.976674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.976701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.976894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.976921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.977069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.977096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.977264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.977293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 
00:34:11.300 [2024-07-25 00:14:16.977446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.977474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.977649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.977677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.977845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.977873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.978067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.978095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 00:34:11.300 [2024-07-25 00:14:16.978282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.978308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 
00:34:11.300 [2024-07-25 00:14:16.982022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.300 [2024-07-25 00:14:16.982061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.300 qpair failed and we were unable to recover it. 
00:34:11.579 [2024-07-25 00:14:17.001986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.002011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.002192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.002236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.002445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.002488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.002672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.002714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.002900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.002930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 
00:34:11.579 [2024-07-25 00:14:17.003113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.003139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.003288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.003330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.003491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.003535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.003716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.003758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.003890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.003916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 
00:34:11.579 [2024-07-25 00:14:17.004099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.004129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.004305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.004348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.004526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.004571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.004745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.004788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.004951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.004977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 
00:34:11.579 [2024-07-25 00:14:17.005156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.005185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.005348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.005392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.005581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.005623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.005805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.005849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.006040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.006065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 
00:34:11.579 [2024-07-25 00:14:17.006267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.006294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.006454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.006496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.006627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.006653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.006834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.006860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.007021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.007046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 
00:34:11.579 [2024-07-25 00:14:17.007229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.007272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.007421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.007463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.007651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.007694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.007851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.007876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.008057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.008082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 
00:34:11.579 [2024-07-25 00:14:17.008268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.008312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.008529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.008571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-07-25 00:14:17.008733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-07-25 00:14:17.008759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.008896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.008921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.009058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.009083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 
00:34:11.580 [2024-07-25 00:14:17.009310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.009353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.009575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.009619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.009804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.009848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.009988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.010014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.010189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.010232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 
00:34:11.580 [2024-07-25 00:14:17.010558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.010600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.010803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.010846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.011029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.011055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.011269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.011297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.011515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.011562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 
00:34:11.580 [2024-07-25 00:14:17.011750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.011793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.011976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.012001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.012177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.012220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.012393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.012436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.012617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.012662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 
00:34:11.580 [2024-07-25 00:14:17.012819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.012845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.013016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.013042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.013193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.013241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.013424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.013467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.013646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.013689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 
00:34:11.580 [2024-07-25 00:14:17.013819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.013844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.014028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.014053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.014238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.014280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.014443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.014485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.014657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.014704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 
00:34:11.580 [2024-07-25 00:14:17.014891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.014916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.015098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.015129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.015308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.015350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.015554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.015597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.015779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.015824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 
00:34:11.580 [2024-07-25 00:14:17.015982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.016007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.016183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.016235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.016416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.016460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-07-25 00:14:17.016637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-07-25 00:14:17.016686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.016842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.016867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 
00:34:11.581 [2024-07-25 00:14:17.016991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.017015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.017219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.017262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.017438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.017481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.017635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.017678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.017830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.017855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 
00:34:11.581 [2024-07-25 00:14:17.018019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.018044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.018223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.018267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.018462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.018505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.018716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.018759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.018900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.018925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 
00:34:11.581 [2024-07-25 00:14:17.019111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.019137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.019281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.019325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.019499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.019541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.019725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.019770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.019892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.019921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 
00:34:11.581 [2024-07-25 00:14:17.020080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.020110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.020321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.020363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.020516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.020560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.020736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.020779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.020934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.020961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 
00:34:11.581 [2024-07-25 00:14:17.021141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.021170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.021373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.021415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.021597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.021644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.021778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.021803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.021961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.021986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 
00:34:11.581 [2024-07-25 00:14:17.022187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.022230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.022413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.022454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.022661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.022702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-07-25 00:14:17.022878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-07-25 00:14:17.022905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.023097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.023131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 
00:34:11.582 [2024-07-25 00:14:17.023313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.023356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.023561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.023604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.023784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.023828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.023984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.024010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.024189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.024231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 
00:34:11.582 [2024-07-25 00:14:17.024387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.024430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.024640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.024682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.024807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.024832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.024965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.024990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.025168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.025210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 
00:34:11.582 [2024-07-25 00:14:17.025391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.025434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.025618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.025661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.025852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.025877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.026008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.026034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.026211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.026254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 
00:34:11.582 [2024-07-25 00:14:17.026430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.026472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.026647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.026689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.026850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.026876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.027033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.027058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.027215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.027243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 
00:34:11.582 [2024-07-25 00:14:17.027436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.027482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.027684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.027727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.027866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.027891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.028059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.028084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.028272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.028318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 
00:34:11.582 [2024-07-25 00:14:17.028500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.028544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.028748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.028791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.028956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.028982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.029183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.029230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.029410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.029453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 
00:34:11.582 [2024-07-25 00:14:17.029630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.029673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.029832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.029858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.030052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.030078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.030255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.030298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-07-25 00:14:17.030451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-07-25 00:14:17.030494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 
00:34:11.582 [2024-07-25 00:14:17.030676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.030719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.030846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.030871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.031025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.031051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.031244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.031289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.031443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.031486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 
00:34:11.583 [2024-07-25 00:14:17.031691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.031734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.031892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.031917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.032069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.032094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.032256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.032299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.032475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.032517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 
00:34:11.583 [2024-07-25 00:14:17.032700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.032745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.032876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.032901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.033059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.033084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.033293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.033336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.033508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.033550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 
00:34:11.583 [2024-07-25 00:14:17.033701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.033745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.033933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.033958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.034113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.034154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.034336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.034383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.034544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.034587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 
00:34:11.583 [2024-07-25 00:14:17.034769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.034811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.034965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.034990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.035119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.035144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.035322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.035350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.035542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.035585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 
00:34:11.583 [2024-07-25 00:14:17.035751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.035777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.035930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.035956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.036134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.036178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.036365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.036408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.036588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.036635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 
00:34:11.583 [2024-07-25 00:14:17.036820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.036846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.037003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.037028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.037222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.037265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.037449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.037493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.037650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.037693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 
00:34:11.583 [2024-07-25 00:14:17.037874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.037900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-07-25 00:14:17.038031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-07-25 00:14:17.038056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-07-25 00:14:17.038231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-07-25 00:14:17.038273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-07-25 00:14:17.038463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-07-25 00:14:17.038490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-07-25 00:14:17.038641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-07-25 00:14:17.038684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 
00:34:11.584 [2024-07-25 00:14:17.038866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-07-25 00:14:17.038891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-07-25 00:14:17.039050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-07-25 00:14:17.039077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-07-25 00:14:17.039264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-07-25 00:14:17.039307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-07-25 00:14:17.039490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-07-25 00:14:17.039534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-07-25 00:14:17.039712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-07-25 00:14:17.039756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 
00:34:11.584 [2024-07-25 00:14:17.039918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.039943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.040081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.040111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.040324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.040367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.040551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.040595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.040807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.040850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.041035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.041061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.041236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.041262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.041435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.041477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.041680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.041723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.041903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.041950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.042135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.042162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.042370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.042413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.042594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.042638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.042802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.042844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.043031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.043056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.043239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.043283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.043458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.043501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.043688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.043731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.043879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.043905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.044087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.044118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.044298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.044324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.044494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.044537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.044753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.044794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.044985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.045011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.045220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.045269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.045442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.045485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.045653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.045697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.584 qpair failed and we were unable to recover it.
00:34:11.584 [2024-07-25 00:14:17.045851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.584 [2024-07-25 00:14:17.045892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.046053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.046089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.046257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.046301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.046481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.046523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.046697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.046739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.046923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.046949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.047110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.047154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.047339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.047382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.047530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.047558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.047759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.047802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.047986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.048012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.048200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.048244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.048423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.048470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.048655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.048698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.048881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.048907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.049056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.049083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.049272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.049318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.049494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.049542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.049725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.049768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.049894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.049919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.050049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.050076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.050271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.050314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.050470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.050512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.050691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.050735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.050891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.050916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.051071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.051096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.051284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.051327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.051481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.051523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.051695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.051741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.051895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.051921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.052081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.052111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.052263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.052307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.052490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.052533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.052709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.052755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.052915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.052941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.053122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.053149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.585 [2024-07-25 00:14:17.053346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.585 [2024-07-25 00:14:17.053391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.585 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.053568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.053614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.053766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.053808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.053971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.053997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.054167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.054195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.054422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.054465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.054675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.054718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.054892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.054934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.055090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.055121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.055281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.055306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.055516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.055544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.055774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.055816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.056013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.056038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.056206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.056232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.056423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.056450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.056636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.056678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.056852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.056895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.057030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.057055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.057242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.057286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.057437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.057462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.057625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.057652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.057836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.057861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.058020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.058045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.058230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.058273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.058460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.058502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.058694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.058720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.058899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.058924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.059110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.059136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.059315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.059359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.059579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.059621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.059804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.059847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.060006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.586 [2024-07-25 00:14:17.060032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.586 qpair failed and we were unable to recover it.
00:34:11.586 [2024-07-25 00:14:17.060216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.060259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.060436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.060481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.060657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.060699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.060856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.060883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.061044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.061069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 
00:34:11.587 [2024-07-25 00:14:17.061240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.061282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.061466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.061508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.061669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.061712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.061872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.061897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.062049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.062074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 
00:34:11.587 [2024-07-25 00:14:17.062253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.062297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.062502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.062545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.062731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.062774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.062929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.062954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.063145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.063188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 
00:34:11.587 [2024-07-25 00:14:17.063339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.063382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.063560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.063603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.063782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.063823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.063979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.064005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.064163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.064193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 
00:34:11.587 [2024-07-25 00:14:17.064366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.064395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.064618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.064660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.064824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.064849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.065034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.065059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.065235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.065278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 
00:34:11.587 [2024-07-25 00:14:17.065449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.065492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.065696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.065738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.065919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.065944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.066127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.066155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.066337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.066379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 
00:34:11.587 [2024-07-25 00:14:17.066588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.066631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.066807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.066849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.067033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.067059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.067236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.067280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.067437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.067481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 
00:34:11.587 [2024-07-25 00:14:17.067687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.067730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-07-25 00:14:17.067890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-07-25 00:14:17.067920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.068075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.068100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.068298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.068342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.068523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.068566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 
00:34:11.588 [2024-07-25 00:14:17.068747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.068792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.068917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.068942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.069124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.069153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.069342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.069390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.069579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.069622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 
00:34:11.588 [2024-07-25 00:14:17.069801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.069844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.070006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.070031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.070189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.070215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.070424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.070452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.070648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.070691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 
00:34:11.588 [2024-07-25 00:14:17.070855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.070881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.071071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.071096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.071309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.071352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.071559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.071601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.071782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.071826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 
00:34:11.588 [2024-07-25 00:14:17.071961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.071986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.072121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.072148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.072326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.072368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.072619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.072661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.072844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.072869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 
00:34:11.588 [2024-07-25 00:14:17.073000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.073026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.073208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.073252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.073435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.073477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.073611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.073638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.073777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.073802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 
00:34:11.588 [2024-07-25 00:14:17.073932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.073957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.074096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.074127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.074313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.074357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.074572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.074615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.074792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.074818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 
00:34:11.588 [2024-07-25 00:14:17.074970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.074995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.075123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.075148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.075310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-07-25 00:14:17.075352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-07-25 00:14:17.075537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.589 [2024-07-25 00:14:17.075581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.589 qpair failed and we were unable to recover it. 00:34:11.589 [2024-07-25 00:14:17.075775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.589 [2024-07-25 00:14:17.075800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.589 qpair failed and we were unable to recover it. 
00:34:11.589 [2024-07-25 00:14:17.075977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.589 [2024-07-25 00:14:17.076002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.589 qpair failed and we were unable to recover it. 00:34:11.589 [2024-07-25 00:14:17.076178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.589 [2024-07-25 00:14:17.076226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.589 qpair failed and we were unable to recover it. 00:34:11.589 [2024-07-25 00:14:17.076407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.589 [2024-07-25 00:14:17.076450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.589 qpair failed and we were unable to recover it. 00:34:11.589 [2024-07-25 00:14:17.076627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.589 [2024-07-25 00:14:17.076670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.589 qpair failed and we were unable to recover it. 00:34:11.589 [2024-07-25 00:14:17.076828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.589 [2024-07-25 00:14:17.076852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.589 qpair failed and we were unable to recover it. 
00:34:11.589 [2024-07-25 00:14:17.077011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.589 [2024-07-25 00:14:17.077036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.589 qpair failed and we were unable to recover it. 00:34:11.589 [2024-07-25 00:14:17.077217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.589 [2024-07-25 00:14:17.077261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.589 qpair failed and we were unable to recover it. 00:34:11.589 [2024-07-25 00:14:17.077456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.589 [2024-07-25 00:14:17.077500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.589 qpair failed and we were unable to recover it. 00:34:11.589 [2024-07-25 00:14:17.077644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.589 [2024-07-25 00:14:17.077687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.589 qpair failed and we were unable to recover it. 00:34:11.589 [2024-07-25 00:14:17.077820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.589 [2024-07-25 00:14:17.077845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.589 qpair failed and we were unable to recover it. 
00:34:11.589 [2024-07-25 00:14:17.077979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.589 [2024-07-25 00:14:17.078004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.589 qpair failed and we were unable to recover it. 00:34:11.589 [2024-07-25 00:14:17.078157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.589 [2024-07-25 00:14:17.078184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.589 qpair failed and we were unable to recover it. 00:34:11.589 [2024-07-25 00:14:17.078386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.589 [2024-07-25 00:14:17.078428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.589 qpair failed and we were unable to recover it. 00:34:11.589 [2024-07-25 00:14:17.078609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.589 [2024-07-25 00:14:17.078651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.589 qpair failed and we were unable to recover it. 00:34:11.589 [2024-07-25 00:14:17.078807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.589 [2024-07-25 00:14:17.078832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.589 qpair failed and we were unable to recover it. 
00:34:11.592 [2024-07-25 00:14:17.102262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.592 [2024-07-25 00:14:17.102305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.592 qpair failed and we were unable to recover it. 00:34:11.592 [2024-07-25 00:14:17.102524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.592 [2024-07-25 00:14:17.102567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.592 qpair failed and we were unable to recover it. 00:34:11.592 [2024-07-25 00:14:17.102737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.592 [2024-07-25 00:14:17.102764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.592 qpair failed and we were unable to recover it. 00:34:11.592 [2024-07-25 00:14:17.102931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.592 [2024-07-25 00:14:17.102956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.592 qpair failed and we were unable to recover it. 00:34:11.592 [2024-07-25 00:14:17.103112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.592 [2024-07-25 00:14:17.103138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.592 qpair failed and we were unable to recover it. 
00:34:11.592 [2024-07-25 00:14:17.103289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.592 [2024-07-25 00:14:17.103333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.592 qpair failed and we were unable to recover it. 00:34:11.592 [2024-07-25 00:14:17.103540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.592 [2024-07-25 00:14:17.103582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.592 qpair failed and we were unable to recover it. 00:34:11.592 [2024-07-25 00:14:17.103791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.592 [2024-07-25 00:14:17.103833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.592 qpair failed and we were unable to recover it. 00:34:11.592 [2024-07-25 00:14:17.103990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.592 [2024-07-25 00:14:17.104015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.592 qpair failed and we were unable to recover it. 00:34:11.592 [2024-07-25 00:14:17.104166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.592 [2024-07-25 00:14:17.104210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.592 qpair failed and we were unable to recover it. 
00:34:11.592 [2024-07-25 00:14:17.104406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.592 [2024-07-25 00:14:17.104433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.592 qpair failed and we were unable to recover it. 00:34:11.592 [2024-07-25 00:14:17.104641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.592 [2024-07-25 00:14:17.104684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.592 qpair failed and we were unable to recover it. 00:34:11.592 [2024-07-25 00:14:17.104843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.592 [2024-07-25 00:14:17.104868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.592 qpair failed and we were unable to recover it. 00:34:11.592 [2024-07-25 00:14:17.105025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.592 [2024-07-25 00:14:17.105050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.592 qpair failed and we were unable to recover it. 00:34:11.592 [2024-07-25 00:14:17.105196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.105239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 
00:34:11.593 [2024-07-25 00:14:17.105398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.105439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.105647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.105690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.105811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.105837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.106022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.106047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.106201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.106245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 
00:34:11.593 [2024-07-25 00:14:17.106399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.106443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.106660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.106703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.106860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.106885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.107026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.107051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.107237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.107281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 
00:34:11.593 [2024-07-25 00:14:17.107454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.107497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.107676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.107725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.107863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.107889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.108072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.108098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.108261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.108305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 
00:34:11.593 [2024-07-25 00:14:17.108461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.108504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.108713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.108756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.108915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.108942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.109098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.109142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.109317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.109363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 
00:34:11.593 [2024-07-25 00:14:17.109514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.109557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.109741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.109787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.109938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.109963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.110097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.110132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.110306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.110349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 
00:34:11.593 [2024-07-25 00:14:17.110528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.110575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.110733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.110777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.110962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.110987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.111157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.111187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.111361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.111403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 
00:34:11.593 [2024-07-25 00:14:17.111533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.111559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.111748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.111775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.111912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.111939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.112096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.112127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 00:34:11.593 [2024-07-25 00:14:17.112335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.112379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.593 qpair failed and we were unable to recover it. 
00:34:11.593 [2024-07-25 00:14:17.112580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.593 [2024-07-25 00:14:17.112608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.112754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.112780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.112963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.112988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.113168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.113198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.113377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.113413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 
00:34:11.594 [2024-07-25 00:14:17.113568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.113593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.113776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.113819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.113977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.114003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.114180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.114223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.114429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.114472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 
00:34:11.594 [2024-07-25 00:14:17.114665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.114708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.114872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.114897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.115031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.115058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.115233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.115277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.115457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.115499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 
00:34:11.594 [2024-07-25 00:14:17.115692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.115718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.115904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.115929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.116090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.116127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.116285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.116311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.116515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.116558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 
00:34:11.594 [2024-07-25 00:14:17.116729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.116771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.116952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.116978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.117154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.117198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.117415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.117458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 00:34:11.594 [2024-07-25 00:14:17.117671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.117714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 
00:34:11.594 [2024-07-25 00:14:17.117860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.594 [2024-07-25 00:14:17.117885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.594 qpair failed and we were unable to recover it. 
00:34:11.597 (the three messages above repeat for every reconnect attempt from 00:14:17.117860 through 00:14:17.140728, always with errno = 111, tqpair=0x7fda78000b90, addr=10.0.0.2, port=4420; duplicate repetitions omitted) 
00:34:11.597 [2024-07-25 00:14:17.140862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.597 [2024-07-25 00:14:17.140887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.597 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.141038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.141063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.141231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.141275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.141433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.141477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.141663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.141705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 
00:34:11.598 [2024-07-25 00:14:17.141842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.141868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.142026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.142056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.142210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.142254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.142406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.142449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.142602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.142629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 
00:34:11.598 [2024-07-25 00:14:17.142809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.142834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.142964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.142990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.143158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.143187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.143357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.143385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.143588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.143631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 
00:34:11.598 [2024-07-25 00:14:17.143790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.143816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.143976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.144001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.144182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.144226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.144381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.144424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.144602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.144645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 
00:34:11.598 [2024-07-25 00:14:17.144782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.144809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.144968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.144993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.145165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.145194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.145359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.145401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.145583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.145626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 
00:34:11.598 [2024-07-25 00:14:17.145761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.145786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.145957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.145982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.146143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.146169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.146306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.146333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.146495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.146537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 
00:34:11.598 [2024-07-25 00:14:17.146744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.146786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.146951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.146977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.147112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.147138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.147326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.147374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.147563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.147607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 
00:34:11.598 [2024-07-25 00:14:17.147745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.147771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.598 [2024-07-25 00:14:17.147949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.598 [2024-07-25 00:14:17.147974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.598 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.148182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.148225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.148384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.148428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.148595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.148639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 
00:34:11.599 [2024-07-25 00:14:17.148777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.148803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.148994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.149019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.149177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.149204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.149364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.149407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.149567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.149610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 
00:34:11.599 [2024-07-25 00:14:17.149762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.149805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.149959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.149989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.150129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.150157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.150335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.150381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.150560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.150605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 
00:34:11.599 [2024-07-25 00:14:17.150738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.150764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.150917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.150943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.151081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.151111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.151303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.151346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.151507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.151549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 
00:34:11.599 [2024-07-25 00:14:17.151733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.151775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.151937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.151962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.152168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.152213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.152366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.152411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.152566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.152593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 
00:34:11.599 [2024-07-25 00:14:17.152745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.152771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.152901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.152926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.153058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.153085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.153277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.153324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.153473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.153516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 
00:34:11.599 [2024-07-25 00:14:17.153670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.153699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.153854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.153879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.154039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.154064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.154230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.154275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.154455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.154497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 
00:34:11.599 [2024-07-25 00:14:17.154659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.154702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.154860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.154886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.599 [2024-07-25 00:14:17.155051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.599 [2024-07-25 00:14:17.155075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.599 qpair failed and we were unable to recover it. 00:34:11.600 [2024-07-25 00:14:17.155263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.600 [2024-07-25 00:14:17.155307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.600 qpair failed and we were unable to recover it. 00:34:11.600 [2024-07-25 00:14:17.155464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.600 [2024-07-25 00:14:17.155505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.600 qpair failed and we were unable to recover it. 
00:34:11.600 [2024-07-25 00:14:17.155712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.600 [2024-07-25 00:14:17.155739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.600 qpair failed and we were unable to recover it. 00:34:11.600 [2024-07-25 00:14:17.155886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.600 [2024-07-25 00:14:17.155912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.600 qpair failed and we were unable to recover it. 00:34:11.600 [2024-07-25 00:14:17.156038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.600 [2024-07-25 00:14:17.156063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.600 qpair failed and we were unable to recover it. 00:34:11.600 [2024-07-25 00:14:17.156226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.600 [2024-07-25 00:14:17.156272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.600 qpair failed and we were unable to recover it. 00:34:11.600 [2024-07-25 00:14:17.156455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.600 [2024-07-25 00:14:17.156498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.600 qpair failed and we were unable to recover it. 
00:34:11.600-00:34:11.603 [2024-07-25 00:14:17.156658 .. 00:14:17.178316] posix.c:1037:posix_sock_create / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: the identical connect() failure (errno = 111) and "qpair failed and we were unable to recover it." sequence repeats ~110 more times for tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420.
00:34:11.603 [2024-07-25 00:14:17.178485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.178528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-07-25 00:14:17.178685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.178728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-07-25 00:14:17.178861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.178886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-07-25 00:14:17.179039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.179068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-07-25 00:14:17.179231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.179274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 
00:34:11.603 [2024-07-25 00:14:17.179434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.179476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-07-25 00:14:17.179628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.179671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-07-25 00:14:17.179829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.179855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-07-25 00:14:17.179991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.180017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-07-25 00:14:17.180175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.180218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 
00:34:11.603 [2024-07-25 00:14:17.180427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.180470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-07-25 00:14:17.180652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.180695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-07-25 00:14:17.180831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.180857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-07-25 00:14:17.181017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.181042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-07-25 00:14:17.181196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.181239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 
00:34:11.603 [2024-07-25 00:14:17.181389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.181431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-07-25 00:14:17.181616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.181660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-07-25 00:14:17.181826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.181851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-07-25 00:14:17.182012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.182037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-07-25 00:14:17.182194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.182237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 
00:34:11.603 [2024-07-25 00:14:17.182422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-07-25 00:14:17.182465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-07-25 00:14:17.182623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.182666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.182799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.182825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.182987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.183012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.183195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.183238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 
00:34:11.604 [2024-07-25 00:14:17.183402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.183449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.183624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.183667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.183800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.183827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.183964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.183990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.184120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.184146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 
00:34:11.604 [2024-07-25 00:14:17.184297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.184341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.184498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.184541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.184732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.184757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.184884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.184909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.185065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.185091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 
00:34:11.604 [2024-07-25 00:14:17.185266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.185310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.185510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.185538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.185727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.185770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.185904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.185931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.186088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.186120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 
00:34:11.604 [2024-07-25 00:14:17.186284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.186327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.186497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.186523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.186672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.186715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.186852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.186882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.187034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.187059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 
00:34:11.604 [2024-07-25 00:14:17.187240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.187284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.187413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.187439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.187612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.187654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.187786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.187811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.187945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.187970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 
00:34:11.604 [2024-07-25 00:14:17.188173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.188216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.188372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.188413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.188624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.188667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.188824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.188849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.189003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.189028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 
00:34:11.604 [2024-07-25 00:14:17.189211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.189255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-07-25 00:14:17.189413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-07-25 00:14:17.189456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.605 [2024-07-25 00:14:17.189636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.189679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-07-25 00:14:17.189811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.189837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-07-25 00:14:17.189963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.189989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 
00:34:11.605 [2024-07-25 00:14:17.190169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.190213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-07-25 00:14:17.190369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.190411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-07-25 00:14:17.190596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.190639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-07-25 00:14:17.190820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.190845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-07-25 00:14:17.190984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.191009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 
00:34:11.605 [2024-07-25 00:14:17.191189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.191233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-07-25 00:14:17.191385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.191428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-07-25 00:14:17.191606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.191649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-07-25 00:14:17.191778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.191804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-07-25 00:14:17.191935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.191959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 
00:34:11.605 [2024-07-25 00:14:17.192145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.192173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-07-25 00:14:17.192343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.192386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-07-25 00:14:17.192547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.192575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-07-25 00:14:17.192743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.192768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-07-25 00:14:17.192908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-07-25 00:14:17.192934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 
00:34:11.605 [2024-07-25 00:14:17.193091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.605 [2024-07-25 00:14:17.193133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.605 qpair failed and we were unable to recover it.
[... the same three-message sequence (connect() failed, errno = 111 → sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats verbatim for every retry from 00:14:17.193289 through 00:14:17.216339 ...]
00:34:11.608 [2024-07-25 00:14:17.216549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-07-25 00:14:17.216591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 00:34:11.608 [2024-07-25 00:14:17.216738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-07-25 00:14:17.216780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 00:34:11.608 [2024-07-25 00:14:17.216938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-07-25 00:14:17.216963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 00:34:11.608 [2024-07-25 00:14:17.217096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-07-25 00:14:17.217138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 00:34:11.608 [2024-07-25 00:14:17.217326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-07-25 00:14:17.217369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 
00:34:11.608 [2024-07-25 00:14:17.217547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.217593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.217747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.217789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.217944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.217969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.218169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.218213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.218384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.218427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 
00:34:11.609 [2024-07-25 00:14:17.218611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.218654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.218849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.218891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.219074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.219099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.219317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.219368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.219579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.219621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 
00:34:11.609 [2024-07-25 00:14:17.219783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.219826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.219977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.220002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.220154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.220182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.220416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.220458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.220664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.220707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 
00:34:11.609 [2024-07-25 00:14:17.220869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.220895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.221053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.221078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.221297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.221340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.221531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.221557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.221767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.221809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 
00:34:11.609 [2024-07-25 00:14:17.221933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.221958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.222093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.222125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.222288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.222331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.222467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.222492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.222669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.222697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 
00:34:11.609 [2024-07-25 00:14:17.222899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.222924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.223060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.223085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.223260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.223302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.223478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.223519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.223711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.223738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 
00:34:11.609 [2024-07-25 00:14:17.223897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.223922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.224077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.224107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.224280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.224322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.224489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.224515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-07-25 00:14:17.224697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-07-25 00:14:17.224740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 
00:34:11.609 [2024-07-25 00:14:17.224907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.224933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.225090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.225134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.225319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.225363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.225550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.225593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.225776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.225821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 
00:34:11.610 [2024-07-25 00:14:17.226004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.226029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.226241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.226269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.226500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.226541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.226751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.226793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.226947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.226972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 
00:34:11.610 [2024-07-25 00:14:17.227170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.227199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.227426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.227468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.227648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.227691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.227852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.227882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.228019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.228044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 
00:34:11.610 [2024-07-25 00:14:17.228225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.228269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.228441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.228484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.228642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.228685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.228845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.228871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.229031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.229056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 
00:34:11.610 [2024-07-25 00:14:17.229237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.229280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.229463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.229506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.229691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.229735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.229897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.229922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.230063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.230088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 
00:34:11.610 [2024-07-25 00:14:17.230267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.230309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.230453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.230496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.230648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.230691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.230856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.230881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-07-25 00:14:17.231040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.231065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 
00:34:11.610 [2024-07-25 00:14:17.231223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-07-25 00:14:17.231268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.231438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.231464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.231627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.231669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.231800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.231825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.232014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.232039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 
00:34:11.611 [2024-07-25 00:14:17.232222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.232264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.232402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.232444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.232625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.232667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.232863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.232889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.233020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.233044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 
00:34:11.611 [2024-07-25 00:14:17.233229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.233273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.233477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.233520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.233727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.233769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.233902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.233927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.234063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.234089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 
00:34:11.611 [2024-07-25 00:14:17.234280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.234323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.234510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.234553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.234716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.234758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.234918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.234944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.235108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.235134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 
00:34:11.611 [2024-07-25 00:14:17.235294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.235320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.235515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.235541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.235750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.235793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.235954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.235984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.236173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.236216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 
00:34:11.611 [2024-07-25 00:14:17.236368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.236415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.236597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.236622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.236753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.236783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.236967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.236992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.237177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.237202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 
00:34:11.611 [2024-07-25 00:14:17.237342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.237367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.237551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.237577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.237731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.237756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.237923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.237948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.238113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.238139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 
00:34:11.611 [2024-07-25 00:14:17.238314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.238358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-07-25 00:14:17.238545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-07-25 00:14:17.238588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.238766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.238809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.238969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.238996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.239134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.239161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 
00:34:11.612 [2024-07-25 00:14:17.239312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.239359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.239566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.239608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.239739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.239764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.239903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.239928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.240089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.240123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 
00:34:11.612 [2024-07-25 00:14:17.240283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.240326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.240489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.240531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.240710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.240754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.240913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.240938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.241093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.241132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 
00:34:11.612 [2024-07-25 00:14:17.241323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.241366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.241542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.241585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.241764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.241806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.241993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.242018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.242189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.242218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 
00:34:11.612 [2024-07-25 00:14:17.242418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.242460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.242636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.242682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.242878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.242903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.243085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.243116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.243276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.243319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 
00:34:11.612 [2024-07-25 00:14:17.243489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.243532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.243739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.243781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.243939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.243965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.244144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.244178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.244348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.244393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 
00:34:11.612 [2024-07-25 00:14:17.244578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.244621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.244837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.244879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.245025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.245050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.245203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.245245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.245376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.245403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 
00:34:11.612 [2024-07-25 00:14:17.245580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.245623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.245788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-07-25 00:14:17.245814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-07-25 00:14:17.245981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.246007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.246184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.246227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.246380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.246422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 
00:34:11.613 [2024-07-25 00:14:17.246586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.246628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.246785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.246810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.246970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.246997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.247200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.247244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.247453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.247494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 
00:34:11.613 [2024-07-25 00:14:17.247676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.247723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.247884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.247910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.248091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.248122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.248303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.248347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.248543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.248568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 
00:34:11.613 [2024-07-25 00:14:17.248729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.248770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.248901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.248926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.249117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.249143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.249300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.249343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.249548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.249591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 
00:34:11.613 [2024-07-25 00:14:17.249781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.249823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.249949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.249974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.250156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.250185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.250385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.250428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.250638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.250682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 
00:34:11.613 [2024-07-25 00:14:17.250860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.250885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.251044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.251070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.251254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.251284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.251481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.251523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-07-25 00:14:17.251728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-07-25 00:14:17.251771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 
00:34:11.613 [2024-07-25 00:14:17.251928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.613 [2024-07-25 00:14:17.251953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.613 qpair failed and we were unable to recover it.
(the three lines above repeat with successive timestamps from 00:14:17.252113 through 00:14:17.275714, always for tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420; repeats elided)
00:34:11.896 [2024-07-25 00:14:17.275874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.275899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.276082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.276112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.276260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.276303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.276470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.276513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.276669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.276713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 
00:34:11.896 [2024-07-25 00:14:17.276867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.276892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.277047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.277072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.277259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.277302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.277489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.277531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.277692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.277734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 
00:34:11.896 [2024-07-25 00:14:17.277860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.277885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.278068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.278093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.278256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.278298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.278506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.278547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.278702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.278745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 
00:34:11.896 [2024-07-25 00:14:17.278932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.278957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.279166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.279209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.279376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.279419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.279599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.279647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.279828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.279853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 
00:34:11.896 [2024-07-25 00:14:17.280009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.280034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.280188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.280232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.280397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.280441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.280629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.280657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.280829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.280854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 
00:34:11.896 [2024-07-25 00:14:17.281019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.281045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.281246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.281288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.281471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.281513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.281722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.281765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.896 qpair failed and we were unable to recover it. 00:34:11.896 [2024-07-25 00:14:17.281901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-25 00:14:17.281928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 
00:34:11.897 [2024-07-25 00:14:17.282060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.282085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.282301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.282329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.282501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.282543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.282698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.282742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.282901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.282926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 
00:34:11.897 [2024-07-25 00:14:17.283090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.283125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.283283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.283326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.283499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.283541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.283709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.283751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.283934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.283959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 
00:34:11.897 [2024-07-25 00:14:17.284120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.284146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.284307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.284348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.284524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.284568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.284760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.284786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.284944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.284969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 
00:34:11.897 [2024-07-25 00:14:17.285163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.285192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.285413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.285455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.285663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.285705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.285888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.285913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.286099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.286130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 
00:34:11.897 [2024-07-25 00:14:17.286300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.286342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.286523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.286568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.286748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.286790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.286948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.286974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.287174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.287203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 
00:34:11.897 [2024-07-25 00:14:17.287406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.287449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.287662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-25 00:14:17.287705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.897 qpair failed and we were unable to recover it. 00:34:11.897 [2024-07-25 00:14:17.287870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.287894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 00:34:11.898 [2024-07-25 00:14:17.288052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.288077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 00:34:11.898 [2024-07-25 00:14:17.288292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.288335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 
00:34:11.898 [2024-07-25 00:14:17.288516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.288558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 00:34:11.898 [2024-07-25 00:14:17.288707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.288750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 00:34:11.898 [2024-07-25 00:14:17.288921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.288946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 00:34:11.898 [2024-07-25 00:14:17.289117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.289144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 00:34:11.898 [2024-07-25 00:14:17.289326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.289370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 
00:34:11.898 [2024-07-25 00:14:17.289586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.289627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 00:34:11.898 [2024-07-25 00:14:17.289795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.289821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 00:34:11.898 [2024-07-25 00:14:17.289956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.289981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 00:34:11.898 [2024-07-25 00:14:17.290154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.290183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 00:34:11.898 [2024-07-25 00:14:17.290407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.290449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 
00:34:11.898 [2024-07-25 00:14:17.290637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.290681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 00:34:11.898 [2024-07-25 00:14:17.290841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.290866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 00:34:11.898 [2024-07-25 00:14:17.291023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.291049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 00:34:11.898 [2024-07-25 00:14:17.291231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.291274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 00:34:11.898 [2024-07-25 00:14:17.291426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.291472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 
00:34:11.898 [2024-07-25 00:14:17.291646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.898 [2024-07-25 00:14:17.291696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.898 qpair failed and we were unable to recover it. 
[... the same three-message error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 00:14:17.291646 through 00:14:17.315377 ...]
00:34:11.902 [2024-07-25 00:14:17.315585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.315628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.315836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.315879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.316039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.316064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.316262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.316306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.316509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.316552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 
00:34:11.902 [2024-07-25 00:14:17.316732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.316780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.316943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.316968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.317115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.317141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.317322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.317366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.317580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.317622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 
00:34:11.902 [2024-07-25 00:14:17.317770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.317812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.317972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.317997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.318150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.318178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.318377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.318419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.318603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.318645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 
00:34:11.902 [2024-07-25 00:14:17.318776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.318802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.318936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.318961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.319117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.319143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.319333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.319359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.319537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.319579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 
00:34:11.902 [2024-07-25 00:14:17.319741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.319767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.319928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.319953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.320116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.320142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.320314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.320342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.320571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.320613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 
00:34:11.902 [2024-07-25 00:14:17.320785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.320828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.320968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.320993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.321197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.321240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.321412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.321455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.321670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.321712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 
00:34:11.902 [2024-07-25 00:14:17.321841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.321866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-07-25 00:14:17.322006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-07-25 00:14:17.322032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.322220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.322263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.322475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.322518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.322705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.322748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 
00:34:11.903 [2024-07-25 00:14:17.322909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.322935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.323075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.323107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.323263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.323305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.323461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.323505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.323724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.323767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 
00:34:11.903 [2024-07-25 00:14:17.323902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.323928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.324058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.324083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.324276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.324319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.324497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.324542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.324728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.324770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 
00:34:11.903 [2024-07-25 00:14:17.324931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.324960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.325121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.325147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.325332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.325375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.325584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.325627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.325838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.325880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 
00:34:11.903 [2024-07-25 00:14:17.326013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.326038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.326224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.326268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.326457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.326499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.326677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.326724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.326885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.326911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 
00:34:11.903 [2024-07-25 00:14:17.327070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.327095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.327286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.327329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.327509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.327552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.327747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.327790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.327953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.327979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 
00:34:11.903 [2024-07-25 00:14:17.328179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.328232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.328389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.328417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.328590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.328615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.328784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.328810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.328972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.328998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 
00:34:11.903 [2024-07-25 00:14:17.329188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.329232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.329387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.329429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-07-25 00:14:17.329638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-07-25 00:14:17.329681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.904 [2024-07-25 00:14:17.329867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-07-25 00:14:17.329892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-07-25 00:14:17.330049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-07-25 00:14:17.330074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 
00:34:11.904 [2024-07-25 00:14:17.330260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-07-25 00:14:17.330304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-07-25 00:14:17.330467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-07-25 00:14:17.330508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-07-25 00:14:17.330722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-07-25 00:14:17.330750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-07-25 00:14:17.330923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-07-25 00:14:17.330948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-07-25 00:14:17.331076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-07-25 00:14:17.331099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 
00:34:11.904 [2024-07-25 00:14:17.331318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-07-25 00:14:17.331361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-07-25 00:14:17.331515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-07-25 00:14:17.331556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-07-25 00:14:17.331732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-07-25 00:14:17.331776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-07-25 00:14:17.331911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-07-25 00:14:17.331936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-07-25 00:14:17.332074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-07-25 00:14:17.332098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 
00:34:11.907 [2024-07-25 00:14:17.355567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.907 [2024-07-25 00:14:17.355611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.907 qpair failed and we were unable to recover it. 00:34:11.907 [2024-07-25 00:14:17.355827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.907 [2024-07-25 00:14:17.355869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.907 qpair failed and we were unable to recover it. 00:34:11.907 [2024-07-25 00:14:17.356057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.907 [2024-07-25 00:14:17.356083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.907 qpair failed and we were unable to recover it. 00:34:11.907 [2024-07-25 00:14:17.356251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.907 [2024-07-25 00:14:17.356278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.907 qpair failed and we were unable to recover it. 00:34:11.907 [2024-07-25 00:14:17.356442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.907 [2024-07-25 00:14:17.356484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.907 qpair failed and we were unable to recover it. 
00:34:11.907 [2024-07-25 00:14:17.356662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.907 [2024-07-25 00:14:17.356705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.907 qpair failed and we were unable to recover it. 00:34:11.907 [2024-07-25 00:14:17.356866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.907 [2024-07-25 00:14:17.356891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.907 qpair failed and we were unable to recover it. 00:34:11.907 [2024-07-25 00:14:17.357045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.907 [2024-07-25 00:14:17.357070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.907 qpair failed and we were unable to recover it. 00:34:11.907 [2024-07-25 00:14:17.357264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.907 [2024-07-25 00:14:17.357307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.907 qpair failed and we were unable to recover it. 00:34:11.907 [2024-07-25 00:14:17.357461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.907 [2024-07-25 00:14:17.357503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.907 qpair failed and we were unable to recover it. 
00:34:11.907 [2024-07-25 00:14:17.357712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.907 [2024-07-25 00:14:17.357754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.907 qpair failed and we were unable to recover it. 00:34:11.907 [2024-07-25 00:14:17.357917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.907 [2024-07-25 00:14:17.357942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.907 qpair failed and we were unable to recover it. 00:34:11.907 [2024-07-25 00:14:17.358073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.907 [2024-07-25 00:14:17.358099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.907 qpair failed and we were unable to recover it. 00:34:11.907 [2024-07-25 00:14:17.358287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.907 [2024-07-25 00:14:17.358330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.907 qpair failed and we were unable to recover it. 00:34:11.907 [2024-07-25 00:14:17.358546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.907 [2024-07-25 00:14:17.358589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.907 qpair failed and we were unable to recover it. 
00:34:11.907 [2024-07-25 00:14:17.358726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.907 [2024-07-25 00:14:17.358757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.907 qpair failed and we were unable to recover it. 00:34:11.907 [2024-07-25 00:14:17.358937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.907 [2024-07-25 00:14:17.358962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.907 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.359177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.359205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.359379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.359422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.359591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.359618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 
00:34:11.908 [2024-07-25 00:14:17.359774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.359799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.359958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.359984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.360158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.360187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.360401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.360428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.360613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.360656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 
00:34:11.908 [2024-07-25 00:14:17.360839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.360864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.360989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.361014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.361175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.361223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.361379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.361404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.361568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.361593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 
00:34:11.908 [2024-07-25 00:14:17.361775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.361800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.361957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.361981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.362178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.362226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.362412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.362437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.362594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.362637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 
00:34:11.908 [2024-07-25 00:14:17.362798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.362823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.362981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.363007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.363153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.363182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.363403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.363446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.363599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.363641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 
00:34:11.908 [2024-07-25 00:14:17.363807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.363832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.363992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.364017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.364201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.364244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.364420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.364462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.364642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.364684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 
00:34:11.908 [2024-07-25 00:14:17.364820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.364846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.365050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.365076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.365291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.365335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.365517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.365562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.365696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.365723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 
00:34:11.908 [2024-07-25 00:14:17.365852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.365877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.366003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.366029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.366231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.908 [2024-07-25 00:14:17.366274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.908 qpair failed and we were unable to recover it. 00:34:11.908 [2024-07-25 00:14:17.366490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.366531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.366746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.366788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 
00:34:11.909 [2024-07-25 00:14:17.366919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.366948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.367099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.367130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.367311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.367353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.367536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.367579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.367759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.367806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 
00:34:11.909 [2024-07-25 00:14:17.367944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.367971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.368149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.368178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.368369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.368412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.368592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.368637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.368797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.368822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 
00:34:11.909 [2024-07-25 00:14:17.368982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.369007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.369182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.369226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.369382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.369423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.369604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.369648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.369803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.369828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 
00:34:11.909 [2024-07-25 00:14:17.369966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.369991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.370162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.370206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.370390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.370433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.370630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.370656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.370817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.370844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 
00:34:11.909 [2024-07-25 00:14:17.370982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.371007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.371187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.371229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.371437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.371480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.371642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.371685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 00:34:11.909 [2024-07-25 00:14:17.371842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.909 [2024-07-25 00:14:17.371868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.909 qpair failed and we were unable to recover it. 
00:34:11.912 [2024-07-25 00:14:17.395071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.912 [2024-07-25 00:14:17.395097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.912 qpair failed and we were unable to recover it. 00:34:11.912 [2024-07-25 00:14:17.395254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.912 [2024-07-25 00:14:17.395296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.912 qpair failed and we were unable to recover it. 00:34:11.912 [2024-07-25 00:14:17.395451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.395479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.395644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.395687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.395840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.395864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 
00:34:11.913 [2024-07-25 00:14:17.396019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.396043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.396239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.396283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.396445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.396488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.396697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.396740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.396899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.396924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 
00:34:11.913 [2024-07-25 00:14:17.397078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.397109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.397267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.397309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.397513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.397560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.397724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.397767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.397900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.397925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 
00:34:11.913 [2024-07-25 00:14:17.398081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.398114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.398326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.398369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.398554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.398598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.398775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.398816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.398979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.399004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 
00:34:11.913 [2024-07-25 00:14:17.399183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.399227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.399412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.399455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.399612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.399654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.399852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.399877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.400062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.400087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 
00:34:11.913 [2024-07-25 00:14:17.400283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.400332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.400520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.400563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.400735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.400776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.400924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.400949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.401126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.401152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 
00:34:11.913 [2024-07-25 00:14:17.401335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.401377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.401532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.401573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.401749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.401791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.401947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.401972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.402173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.402217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 
00:34:11.913 [2024-07-25 00:14:17.402379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.402422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.402634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.402677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.402839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.402865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.913 [2024-07-25 00:14:17.403006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.913 [2024-07-25 00:14:17.403032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.913 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.403236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.403279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 
00:34:11.914 [2024-07-25 00:14:17.403452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.403496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.403656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.403699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.403859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.403885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.404044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.404069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.404231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.404274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 
00:34:11.914 [2024-07-25 00:14:17.404455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.404496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.404682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.404725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.404883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.404908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.405064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.405088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.405272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.405314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 
00:34:11.914 [2024-07-25 00:14:17.405505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.405534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.405710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.405752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.405886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.405917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.406078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.406110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.406299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.406344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 
00:34:11.914 [2024-07-25 00:14:17.406530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.406574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.406737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.406780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.406973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.406998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.407152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.407181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.407385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.407427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 
00:34:11.914 [2024-07-25 00:14:17.407584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.407612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.407788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.407830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.407991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.408016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.408169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.408198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.408414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.408456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 
00:34:11.914 [2024-07-25 00:14:17.408630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.408673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.408834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.408860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.409020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.409047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.409236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.409286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.409496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.409538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 
00:34:11.914 [2024-07-25 00:14:17.409721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.409766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.409929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.409954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.410123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.410166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.410353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.410396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 00:34:11.914 [2024-07-25 00:14:17.410571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.914 [2024-07-25 00:14:17.410613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.914 qpair failed and we were unable to recover it. 
00:34:11.915 [2024-07-25 00:14:17.410795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-07-25 00:14:17.410842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 
[... previous connect()/qpair error pair repeated with successive timestamps through 2024-07-25 00:14:17.433013 ...]
00:34:11.918 [2024-07-25 00:14:17.433158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.433185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.433377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.433419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.433577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.433619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.433800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.433825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.433961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.433986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 
00:34:11.918 [2024-07-25 00:14:17.434122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.434148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.434306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.434351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.434523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.434550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.434760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.434786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.434952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.434978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 
00:34:11.918 [2024-07-25 00:14:17.435107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.435133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.435302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.435346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.435523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.435566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.435751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.435776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.435907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.435933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 
00:34:11.918 [2024-07-25 00:14:17.436114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.436160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.436325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.436369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.436558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.436601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.436788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.436813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.436947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.436972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 
00:34:11.918 [2024-07-25 00:14:17.437115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.437141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.437358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.437386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.437583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.437630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.437761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.437787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-07-25 00:14:17.437917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-07-25 00:14:17.437942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 
00:34:11.919 [2024-07-25 00:14:17.438085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.438116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.438268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.438311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.438476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.438519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.438702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.438744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.438873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.438898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 
00:34:11.919 [2024-07-25 00:14:17.439064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.439091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.439257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.439301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.439492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.439519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.439684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.439726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.439861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.439886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 
00:34:11.919 [2024-07-25 00:14:17.440046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.440072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.440254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.440298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.440481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.440524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.440702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.440744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.440900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.440925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 
00:34:11.919 [2024-07-25 00:14:17.441061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.441087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.441251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.441294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.441484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.441528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.441679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.441721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.441902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.441927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 
00:34:11.919 [2024-07-25 00:14:17.442080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.442121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.442313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.442341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.442532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.442578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.442731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.442775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.442936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.442961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 
00:34:11.919 [2024-07-25 00:14:17.443115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.443142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.443347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.443392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.443541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.443583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.443765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.443807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.443965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.443990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 
00:34:11.919 [2024-07-25 00:14:17.444174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.444216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.444377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.444405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.444567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.444611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.444786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.444811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-07-25 00:14:17.444947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.444972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 
00:34:11.919 [2024-07-25 00:14:17.445153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-07-25 00:14:17.445182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.445352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.445381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.445566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.445613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.445748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.445774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.445935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.445960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 
00:34:11.920 [2024-07-25 00:14:17.446095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.446128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.446315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.446358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.446517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.446560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.446715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.446741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.446879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.446905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 
00:34:11.920 [2024-07-25 00:14:17.447062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.447089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.447266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.447309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.447463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.447505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.447678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.447721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.447905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.447930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 
00:34:11.920 [2024-07-25 00:14:17.448091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.448123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.448317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.448360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.448512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.448556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.448709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.448755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.448884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.448909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 
00:34:11.920 [2024-07-25 00:14:17.449088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.449119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.449275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.449319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.449474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.449502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.449695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.449740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.449874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.449898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 
00:34:11.920 [2024-07-25 00:14:17.450025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.450050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.450208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.450251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.450380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.450406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.450555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.450598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.450786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.450829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 
00:34:11.920 [2024-07-25 00:14:17.450995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.451022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.920 [2024-07-25 00:14:17.451202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.920 [2024-07-25 00:14:17.451246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.920 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.451396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.451440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.451615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.451658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.451795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.451822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 
00:34:11.921 [2024-07-25 00:14:17.452002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.452028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.452197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.452242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.452407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.452453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.452611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.452655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.452814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.452839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 
00:34:11.921 [2024-07-25 00:14:17.452971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.452996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.453126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.453152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.453315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.453362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.453574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.453617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.453747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.453772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 
00:34:11.921 [2024-07-25 00:14:17.453952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.453977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.454114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.454140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.454303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.454348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.454502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.454546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.454705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.454730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 
00:34:11.921 [2024-07-25 00:14:17.454888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.454914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.455049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.455074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.455247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.455291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.455483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.455527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.455704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.455747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 
00:34:11.921 [2024-07-25 00:14:17.455907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.455932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.456097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.456129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.456303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.456348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.456563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.456606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.456797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.456840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 
00:34:11.921 [2024-07-25 00:14:17.456976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.457002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.457210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.457255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.457432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.457480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.457657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.457701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.457836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.457862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 
00:34:11.921 [2024-07-25 00:14:17.458001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.458026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.458209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.458239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.458395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.921 [2024-07-25 00:14:17.458420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.921 qpair failed and we were unable to recover it. 00:34:11.921 [2024-07-25 00:14:17.458569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.458612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.458779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.458805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 
00:34:11.922 [2024-07-25 00:14:17.458945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.458972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.459157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.459186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.459389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.459433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.459625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.459668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.459828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.459853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 
00:34:11.922 [2024-07-25 00:14:17.460010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.460036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.460184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.460227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.460392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.460436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.460610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.460653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.460812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.460838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 
00:34:11.922 [2024-07-25 00:14:17.460995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.461021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.461172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.461216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.461377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.461424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.461608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.461651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.461802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.461828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 
00:34:11.922 [2024-07-25 00:14:17.461961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.461988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.462119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.462145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.462299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.462325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.462489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.462533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.462716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.462741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 
00:34:11.922 [2024-07-25 00:14:17.462873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.462899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.463037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.463064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.463263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.463307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.463527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.463570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.463733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.463758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 
00:34:11.922 [2024-07-25 00:14:17.463917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.463942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.464109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.464135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.464316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.464343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.464524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.464567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.464718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.464744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 
00:34:11.922 [2024-07-25 00:14:17.464870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.464895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.465052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.465077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.465286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.465329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.465514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.465557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.922 qpair failed and we were unable to recover it. 00:34:11.922 [2024-07-25 00:14:17.465770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.922 [2024-07-25 00:14:17.465814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.923 qpair failed and we were unable to recover it. 
00:34:11.923 [2024-07-25 00:14:17.465953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.923 [2024-07-25 00:14:17.465979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.923 qpair failed and we were unable to recover it. 00:34:11.923 [2024-07-25 00:14:17.466127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.923 [2024-07-25 00:14:17.466154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.923 qpair failed and we were unable to recover it. 00:34:11.923 [2024-07-25 00:14:17.466340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.923 [2024-07-25 00:14:17.466382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.923 qpair failed and we were unable to recover it. 00:34:11.923 [2024-07-25 00:14:17.466554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.923 [2024-07-25 00:14:17.466581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.923 qpair failed and we were unable to recover it. 00:34:11.923 [2024-07-25 00:14:17.466747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.923 [2024-07-25 00:14:17.466773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.923 qpair failed and we were unable to recover it. 
00:34:11.923 [2024-07-25 00:14:17.466912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.923 [2024-07-25 00:14:17.466937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.923 qpair failed and we were unable to recover it. 00:34:11.923 [2024-07-25 00:14:17.467082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.923 [2024-07-25 00:14:17.467112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.923 qpair failed and we were unable to recover it. 00:34:11.923 [2024-07-25 00:14:17.467279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.923 [2024-07-25 00:14:17.467323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.923 qpair failed and we were unable to recover it. 00:34:11.923 [2024-07-25 00:14:17.467533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.923 [2024-07-25 00:14:17.467576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.923 qpair failed and we were unable to recover it. 00:34:11.923 [2024-07-25 00:14:17.467716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.923 [2024-07-25 00:14:17.467741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:11.923 qpair failed and we were unable to recover it. 
00:34:11.924 [2024-07-25 00:14:17.473683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.924 [2024-07-25 00:14:17.473732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.924 qpair failed and we were unable to recover it.
00:34:11.924 [2024-07-25 00:14:17.473895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.924 [2024-07-25 00:14:17.473920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.924 qpair failed and we were unable to recover it.
00:34:11.924 [2024-07-25 00:14:17.474053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.924 [2024-07-25 00:14:17.474078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:11.924 qpair failed and we were unable to recover it.
00:34:11.924 [2024-07-25 00:14:17.474281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.924 [2024-07-25 00:14:17.474325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.924 qpair failed and we were unable to recover it.
00:34:11.924 [2024-07-25 00:14:17.474516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.924 [2024-07-25 00:14:17.474547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:11.924 qpair failed and we were unable to recover it.
00:34:11.926 [2024-07-25 00:14:17.489997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.490025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 00:34:11.926 [2024-07-25 00:14:17.490221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.490250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 00:34:11.926 [2024-07-25 00:14:17.490447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.490475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 00:34:11.926 [2024-07-25 00:14:17.490672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.490699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 00:34:11.926 [2024-07-25 00:14:17.490873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.490905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 
00:34:11.926 [2024-07-25 00:14:17.491110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.491153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 00:34:11.926 [2024-07-25 00:14:17.491315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.491341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 00:34:11.926 [2024-07-25 00:14:17.491472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.491496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 00:34:11.926 [2024-07-25 00:14:17.491651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.491691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 00:34:11.926 [2024-07-25 00:14:17.491866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.491895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 
00:34:11.926 [2024-07-25 00:14:17.492050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.492076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 00:34:11.926 [2024-07-25 00:14:17.492255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.492284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 00:34:11.926 [2024-07-25 00:14:17.492466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.492491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 00:34:11.926 [2024-07-25 00:14:17.492677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.492702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 00:34:11.926 [2024-07-25 00:14:17.492872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.492897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 
00:34:11.926 [2024-07-25 00:14:17.493080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.493109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 00:34:11.926 [2024-07-25 00:14:17.493244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.493269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 00:34:11.926 [2024-07-25 00:14:17.493410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.493436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 00:34:11.926 [2024-07-25 00:14:17.493661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.926 [2024-07-25 00:14:17.493689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.926 qpair failed and we were unable to recover it. 00:34:11.926 [2024-07-25 00:14:17.493873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.493902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 
00:34:11.927 [2024-07-25 00:14:17.494050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.494077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.494270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.494295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.494426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.494451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.494622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.494650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.494830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.494858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 
00:34:11.927 [2024-07-25 00:14:17.495012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.495038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.495173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.495199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.495353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.495378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.495560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.495585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.495741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.495769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 
00:34:11.927 [2024-07-25 00:14:17.495988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.496016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.496206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.496233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.496364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.496389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.496572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.496597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.496792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.496818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 
00:34:11.927 [2024-07-25 00:14:17.497020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.497048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.497232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.497260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.497442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.497466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.497655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.497683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.497841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.497866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 
00:34:11.927 [2024-07-25 00:14:17.498048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.498073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.498221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.498249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.498425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.498454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.498616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.498641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.498779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.498827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 
00:34:11.927 [2024-07-25 00:14:17.499001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.499029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.499216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.499242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.499420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.499448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.499591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.499618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.499776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.499801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 
00:34:11.927 [2024-07-25 00:14:17.499999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.500027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.500229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.500257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.500411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.500436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.500598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.500641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.927 [2024-07-25 00:14:17.500787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.500816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 
00:34:11.927 [2024-07-25 00:14:17.500999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.927 [2024-07-25 00:14:17.501025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.927 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.501157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.501184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.501316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.501341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.501517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.501543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.501721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.501748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 
00:34:11.928 [2024-07-25 00:14:17.501959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.501984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.502121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.502149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.502324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.502352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.502490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.502518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.502677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.502702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 
00:34:11.928 [2024-07-25 00:14:17.502858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.502882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.503038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.503065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.503249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.503274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.503452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.503479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.503649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.503677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 
00:34:11.928 [2024-07-25 00:14:17.503856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.503880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.504066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.504095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.504259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.504285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.504450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.504475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.504637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.504662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 
00:34:11.928 [2024-07-25 00:14:17.504848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.504877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.505030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.505055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.505223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.505266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.505446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.505473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 00:34:11.928 [2024-07-25 00:14:17.505628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.928 [2024-07-25 00:14:17.505653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.928 qpair failed and we were unable to recover it. 
00:34:11.931 [2024-07-25 00:14:17.527470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.931 [2024-07-25 00:14:17.527496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.931 qpair failed and we were unable to recover it. 00:34:11.931 [2024-07-25 00:14:17.527751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.931 [2024-07-25 00:14:17.527779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.931 qpair failed and we were unable to recover it. 00:34:11.931 [2024-07-25 00:14:17.527979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.931 [2024-07-25 00:14:17.528007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.931 qpair failed and we were unable to recover it. 00:34:11.931 [2024-07-25 00:14:17.528162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.931 [2024-07-25 00:14:17.528188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.931 qpair failed and we were unable to recover it. 00:34:11.931 [2024-07-25 00:14:17.528346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.931 [2024-07-25 00:14:17.528372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.931 qpair failed and we were unable to recover it. 
00:34:11.931 [2024-07-25 00:14:17.528501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.528526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.528691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.528717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.528893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.528921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.529122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.529150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.529336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.529361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 
00:34:11.932 [2024-07-25 00:14:17.529520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.529545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.529753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.529782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.529982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.530007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.530185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.530214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.530414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.530446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 
00:34:11.932 [2024-07-25 00:14:17.530601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.530626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.530790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.530815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.530980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.531007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.531206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.531232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.531449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.531478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 
00:34:11.932 [2024-07-25 00:14:17.531674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.531702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.531880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.531906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.532062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.532090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.532254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.532281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.532474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.532499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 
00:34:11.932 [2024-07-25 00:14:17.532673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.532701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.532862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.532887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.533070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.533095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.533301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.533329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.533498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.533526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 
00:34:11.932 [2024-07-25 00:14:17.533671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.533697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.533852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.533893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.534087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.534120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.534325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.534351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.534552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.534579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 
00:34:11.932 [2024-07-25 00:14:17.534752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.534780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.535023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.535048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.535201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.535227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.535365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.535407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.535556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.535581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 
00:34:11.932 [2024-07-25 00:14:17.535755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.535783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.932 qpair failed and we were unable to recover it. 00:34:11.932 [2024-07-25 00:14:17.535967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.932 [2024-07-25 00:14:17.535995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.536178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.536214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.536393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.536421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.536589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.536617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 
00:34:11.933 [2024-07-25 00:14:17.536792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.536816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.536953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.536978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.537140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.537182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.537363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.537388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.537566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.537594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 
00:34:11.933 [2024-07-25 00:14:17.537736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.537764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.537936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.537961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.538132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.538161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.538343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.538373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.538523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.538553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 
00:34:11.933 [2024-07-25 00:14:17.538706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.538745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.538927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.538957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.539162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.539189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.539335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.539360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.539542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.539571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 
00:34:11.933 [2024-07-25 00:14:17.539733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.539758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.539915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.539941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.540136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.540164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.540307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.540334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.540496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.540521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 
00:34:11.933 [2024-07-25 00:14:17.540650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.540675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.540833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.540858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.541005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.541032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.541189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.541223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.541401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.541426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 
00:34:11.933 [2024-07-25 00:14:17.541668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.541696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.541893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.541918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.542078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.542107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.542285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.542310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 00:34:11.933 [2024-07-25 00:14:17.542448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.542473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it. 
00:34:11.933 [2024-07-25 00:14:17.542609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.933 [2024-07-25 00:14:17.542633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:11.933 qpair failed and we were unable to recover it.
[... same connect() failure (errno = 111) repeated from 00:14:17.542 through 00:14:17.560 for tqpair=0x7fda80000b90, with a brief run against tqpair=0x7fda78000b90 at 00:14:17.543-00:14:17.544, all targeting addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:11.936 [2024-07-25 00:14:17.560343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.936 [2024-07-25 00:14:17.560408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.936 qpair failed and we were unable to recover it.
[... same connect() failure (errno = 111) repeated from 00:14:17.560 through 00:14:17.566 for tqpair=0xfc4570, addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:11.937 [2024-07-25 00:14:17.566530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.566560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.566740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.566768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.566950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.566975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.567198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.567224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.567346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.567371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 
00:34:11.937 [2024-07-25 00:14:17.567526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.567552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.567757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.567785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.567940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.567966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.568098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.568130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.568314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.568338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 
00:34:11.937 [2024-07-25 00:14:17.568511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.568536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.568663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.568688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.568823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.568864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.569004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.569031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.569241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.569267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 
00:34:11.937 [2024-07-25 00:14:17.569470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.569498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.569668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.569695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.569901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.569926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.570108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.570137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.570314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.570339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 
00:34:11.937 [2024-07-25 00:14:17.570532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.570557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.570733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.570760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.570926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.570954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.571110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.571136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.571294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.571319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 
00:34:11.937 [2024-07-25 00:14:17.571512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.571540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.571722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.571747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.571904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.937 [2024-07-25 00:14:17.571929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.937 qpair failed and we were unable to recover it. 00:34:11.937 [2024-07-25 00:14:17.572082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.572132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.572308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.572333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 
00:34:11.938 [2024-07-25 00:14:17.572506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.572534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.572712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.572740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.572889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.572914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.573042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.573082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.573294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.573319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 
00:34:11.938 [2024-07-25 00:14:17.573449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.573474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.573635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.573659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.573814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.573838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.574015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.574043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.574234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.574260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 
00:34:11.938 [2024-07-25 00:14:17.574420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.574445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.574580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.574605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.574734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.574759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.574915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.574940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.575098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.575128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 
00:34:11.938 [2024-07-25 00:14:17.575317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.575342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.575517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.575545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.575748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.575773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.575950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.575977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.576137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.576165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 
00:34:11.938 [2024-07-25 00:14:17.576350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.576376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.576514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.576540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.576698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.576740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.576900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.576925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.577059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.577106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 
00:34:11.938 [2024-07-25 00:14:17.577291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.577317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.577470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.577495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.577673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.577701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.577907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.577932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.578174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.578215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 
00:34:11.938 [2024-07-25 00:14:17.578412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.578439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.578618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.578646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.578828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.578853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.579010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.579052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.579249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.579278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 
00:34:11.938 [2024-07-25 00:14:17.579449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.579474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.579676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.579709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.938 qpair failed and we were unable to recover it. 00:34:11.938 [2024-07-25 00:14:17.579886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.938 [2024-07-25 00:14:17.579913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.939 qpair failed and we were unable to recover it. 00:34:11.939 [2024-07-25 00:14:17.580074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.939 [2024-07-25 00:14:17.580098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.939 qpair failed and we were unable to recover it. 00:34:11.939 [2024-07-25 00:14:17.580322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.939 [2024-07-25 00:14:17.580350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.939 qpair failed and we were unable to recover it. 
00:34:11.939 [2024-07-25 00:14:17.580493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.939 [2024-07-25 00:14:17.580521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.939 qpair failed and we were unable to recover it. 00:34:11.939 [2024-07-25 00:14:17.580736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.939 [2024-07-25 00:14:17.580760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.939 qpair failed and we were unable to recover it. 00:34:11.939 [2024-07-25 00:14:17.580945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.939 [2024-07-25 00:14:17.580970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.939 qpair failed and we were unable to recover it. 00:34:11.939 [2024-07-25 00:14:17.581095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.939 [2024-07-25 00:14:17.581126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.939 qpair failed and we were unable to recover it. 00:34:11.939 [2024-07-25 00:14:17.581297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.939 [2024-07-25 00:14:17.581322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:11.939 qpair failed and we were unable to recover it. 
00:34:11.939 [2024-07-25 00:14:17.581452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.939 [2024-07-25 00:14:17.581476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:11.939 qpair failed and we were unable to recover it.
[previous three-line failure repeated for tqpair=0xfc4570 from 00:14:17.581 through 00:14:17.590]
00:34:12.219 [2024-07-25 00:14:17.591192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.219 [2024-07-25 00:14:17.591231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.219 qpair failed and we were unable to recover it.
[same failure repeated, alternating between tqpair=0x7fda78000b90 and tqpair=0xfc4570, from 00:14:17.591 through 00:14:17.596]
00:34:12.220 [2024-07-25 00:14:17.596241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.220 [2024-07-25 00:14:17.596284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.220 qpair failed and we were unable to recover it.
[same failure repeated for tqpair=0x7fda70000b90, then again for tqpair=0xfc4570, through 00:14:17.598]
00:34:12.220 [2024-07-25 00:14:17.598438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd20f0 (9): Bad file descriptor
[same connect() failure repeated for tqpair=0x7fda78000b90 from 00:14:17.598 through 00:14:17.605]
00:34:12.221 [2024-07-25 00:14:17.606154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.606183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 00:34:12.221 [2024-07-25 00:14:17.606402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.606445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 00:34:12.221 [2024-07-25 00:14:17.606658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.606701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 00:34:12.221 [2024-07-25 00:14:17.606895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.606937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 00:34:12.221 [2024-07-25 00:14:17.607129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.607155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 
00:34:12.221 [2024-07-25 00:14:17.607335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.607378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 00:34:12.221 [2024-07-25 00:14:17.607533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.607576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 00:34:12.221 [2024-07-25 00:14:17.607782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.607810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 00:34:12.221 [2024-07-25 00:14:17.607979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.608004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 00:34:12.221 [2024-07-25 00:14:17.608154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.608184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 
00:34:12.221 [2024-07-25 00:14:17.608415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.608458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 00:34:12.221 [2024-07-25 00:14:17.608647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.608689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 00:34:12.221 [2024-07-25 00:14:17.608842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.608887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 00:34:12.221 [2024-07-25 00:14:17.609025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.609051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 00:34:12.221 [2024-07-25 00:14:17.609259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.609288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 
00:34:12.221 [2024-07-25 00:14:17.609485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.609528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 00:34:12.221 [2024-07-25 00:14:17.609709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.609755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 00:34:12.221 [2024-07-25 00:14:17.609914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.609941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 00:34:12.221 [2024-07-25 00:14:17.610108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.221 [2024-07-25 00:14:17.610134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.221 qpair failed and we were unable to recover it. 00:34:12.221 [2024-07-25 00:14:17.610327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.610354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 
00:34:12.222 [2024-07-25 00:14:17.610538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.610581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.610740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.610785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.610918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.610945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.611127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.611153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.611316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.611343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 
00:34:12.222 [2024-07-25 00:14:17.611537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.611562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.611722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.611750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.611898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.611924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.612078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.612109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.612257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.612286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 
00:34:12.222 [2024-07-25 00:14:17.612489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.612532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.612847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.612898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.613085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.613120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.613277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.613320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.613503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.613545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 
00:34:12.222 [2024-07-25 00:14:17.613718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.613761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.613918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.613943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.614107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.614138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.614316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.614360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.614571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.614613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 
00:34:12.222 [2024-07-25 00:14:17.614790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.614834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.614959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.614984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.615167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.615210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.615392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.615435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.615611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.615653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 
00:34:12.222 [2024-07-25 00:14:17.615808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.615833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.615998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.616023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.616234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.616278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.616459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.616489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.616644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.616672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 
00:34:12.222 [2024-07-25 00:14:17.616844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.616872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.617077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.617115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.617273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.617298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.617481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.617509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.222 [2024-07-25 00:14:17.617681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.617708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 
00:34:12.222 [2024-07-25 00:14:17.617879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.222 [2024-07-25 00:14:17.617907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.222 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.618122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.618150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.618317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.618360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.618542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.618586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.618750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.618793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 
00:34:12.223 [2024-07-25 00:14:17.618961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.618986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.619164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.619193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.619407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.619434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.619606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.619649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.619819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.619868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 
00:34:12.223 [2024-07-25 00:14:17.620032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.620057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.620245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.620289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.620500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.620542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.620696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.620739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.620925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.620950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 
00:34:12.223 [2024-07-25 00:14:17.621123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.621149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.621333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.621378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.621568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.621597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.621794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.621837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.622023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.622048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 
00:34:12.223 [2024-07-25 00:14:17.622227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.622271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.622439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.622482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.622659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.622701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.622894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.622919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 00:34:12.223 [2024-07-25 00:14:17.623052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.223 [2024-07-25 00:14:17.623078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.223 qpair failed and we were unable to recover it. 
00:34:12.226 [2024-07-25 00:14:17.646245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.226 [2024-07-25 00:14:17.646290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.226 qpair failed and we were unable to recover it. 00:34:12.226 [2024-07-25 00:14:17.646470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.646515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.646713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.646739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.646879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.646905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.647095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.647126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 
00:34:12.227 [2024-07-25 00:14:17.647305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.647348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.647535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.647577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.647764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.647808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.647966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.647991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.648120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.648146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 
00:34:12.227 [2024-07-25 00:14:17.648304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.648346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.648535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.648576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.648727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.648769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.648953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.648978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.649185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.649228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 
00:34:12.227 [2024-07-25 00:14:17.649401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.649447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.649636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.649664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.649841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.649867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.649995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.650021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.650200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.650243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 
00:34:12.227 [2024-07-25 00:14:17.650420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.650461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.650652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.650695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.650858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.650883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.651024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.651050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.651232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.651274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 
00:34:12.227 [2024-07-25 00:14:17.651456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.651501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.651708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.651751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.651939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.651964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.652125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.652151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.652336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.652378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 
00:34:12.227 [2024-07-25 00:14:17.652552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.652594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.652775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.652800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.652958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.652983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.653116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.653141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.653351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.653398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 
00:34:12.227 [2024-07-25 00:14:17.653619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.653663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.653820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.227 [2024-07-25 00:14:17.653847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.227 qpair failed and we were unable to recover it. 00:34:12.227 [2024-07-25 00:14:17.653997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.654022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.654204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.654246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.654465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.654508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 
00:34:12.228 [2024-07-25 00:14:17.654729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.654772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.654932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.654957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.655114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.655141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.655314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.655357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.655514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.655557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 
00:34:12.228 [2024-07-25 00:14:17.655763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.655806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.655939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.655965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.656125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.656151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.656338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.656385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.656568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.656611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 
00:34:12.228 [2024-07-25 00:14:17.656768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.656794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.656975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.657000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.657153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.657183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.657376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.657419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.657606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.657648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 
00:34:12.228 [2024-07-25 00:14:17.657784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.657809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.657964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.657990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.658118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.658144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.658310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.658357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.658548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.658590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 
00:34:12.228 [2024-07-25 00:14:17.658748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.658774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.658935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.658960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.659123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.659148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.659326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.659370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.659530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.659573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 
00:34:12.228 [2024-07-25 00:14:17.659737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.659762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.659944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.659969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.660124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.660149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.660315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.660357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.660538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.660580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 
00:34:12.228 [2024-07-25 00:14:17.660741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.660766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.660921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.660946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.661082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.661113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.228 [2024-07-25 00:14:17.661294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.228 [2024-07-25 00:14:17.661337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.228 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.661553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.661599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 
00:34:12.229 [2024-07-25 00:14:17.661755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.661799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.661962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.661987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.662144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.662172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.662373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.662417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.662628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.662670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 
00:34:12.229 [2024-07-25 00:14:17.662854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.662879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.663061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.663086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.663253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.663296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.663476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.663521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.663708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.663752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 
00:34:12.229 [2024-07-25 00:14:17.663906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.663932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.664092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.664125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.664312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.664355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.664548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.664591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.664775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.664803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 
00:34:12.229 [2024-07-25 00:14:17.664980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.665006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.665190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.665234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.665423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.665467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.665649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.665692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.665880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.665905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 
00:34:12.229 [2024-07-25 00:14:17.666069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.666094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.666257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.666300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.666502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.666544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.666759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.666800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.666939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.666964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 
00:34:12.229 [2024-07-25 00:14:17.667096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.667128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.667371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.667413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.667592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.667623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.667824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.667853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.229 qpair failed and we were unable to recover it. 00:34:12.229 [2024-07-25 00:14:17.668031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.229 [2024-07-25 00:14:17.668056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 
00:34:12.230 [2024-07-25 00:14:17.668234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.668260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.668423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.668466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.668694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.668722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.668898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.668925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.669152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.669178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 
00:34:12.230 [2024-07-25 00:14:17.669367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.669393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.669597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.669625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.669782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.669809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.670004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.670031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.670198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.670230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 
00:34:12.230 [2024-07-25 00:14:17.670402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.670430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.670623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.670652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.670831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.670861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.671016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.671041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.671229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.671266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 
00:34:12.230 [2024-07-25 00:14:17.671461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.671489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.671639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.671668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.671834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.671863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.672007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.672036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.672200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.672227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 
00:34:12.230 [2024-07-25 00:14:17.672386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.672411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.672591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.672619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.672838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.672866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.673049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.673078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.673265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.673290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 
00:34:12.230 [2024-07-25 00:14:17.673480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.673507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.673682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.673709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.673909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.673936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.674116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.674143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.674325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.674350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 
00:34:12.230 [2024-07-25 00:14:17.674526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.674554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.674735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.674776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.674975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.675003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.675163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.675189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.230 [2024-07-25 00:14:17.675318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.675343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 
00:34:12.230 [2024-07-25 00:14:17.675534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.230 [2024-07-25 00:14:17.675562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.230 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.675764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.675809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.676099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.676159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.676333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.676371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.676589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.676634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 
00:34:12.231 [2024-07-25 00:14:17.676822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.676864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.677004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.677029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.677195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.677221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.677380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.677423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.677621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.677648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 
00:34:12.231 [2024-07-25 00:14:17.677798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.677842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.678035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.678060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.678250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.678293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.678467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.678509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.678711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.678758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 
00:34:12.231 [2024-07-25 00:14:17.678920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.678946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.679112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.679138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.679302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.679345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.679527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.679569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.679753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.679781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 
00:34:12.231 [2024-07-25 00:14:17.679957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.679982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.680142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.680168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.680321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.680364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.680523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.680568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.680815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.680866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 
00:34:12.231 [2024-07-25 00:14:17.681051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.681076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.681213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.681240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.681403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.681446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.681642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.681670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 00:34:12.231 [2024-07-25 00:14:17.681847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.231 [2024-07-25 00:14:17.681872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.231 qpair failed and we were unable to recover it. 
00:34:12.231 [2024-07-25 00:14:17.682033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.231 [2024-07-25 00:14:17.682058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.231 qpair failed and we were unable to recover it.
00:34:12.231 [2024-07-25 00:14:17.682245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.231 [2024-07-25 00:14:17.682288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.231 qpair failed and we were unable to recover it.
00:34:12.231 [2024-07-25 00:14:17.682482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.231 [2024-07-25 00:14:17.682508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.231 qpair failed and we were unable to recover it.
00:34:12.231 [2024-07-25 00:14:17.682659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.231 [2024-07-25 00:14:17.682703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.231 qpair failed and we were unable to recover it.
00:34:12.231 [2024-07-25 00:14:17.682895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.231 [2024-07-25 00:14:17.682920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.231 qpair failed and we were unable to recover it.
00:34:12.231 [2024-07-25 00:14:17.683078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.231 [2024-07-25 00:14:17.683109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.231 qpair failed and we were unable to recover it.
00:34:12.231 [2024-07-25 00:14:17.683318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.231 [2024-07-25 00:14:17.683347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.231 qpair failed and we were unable to recover it.
00:34:12.231 [2024-07-25 00:14:17.683544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.683590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.683781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.683809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.683987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.684014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.684185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.684229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.684388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.684431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.684616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.684644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.684869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.684912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.685069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.685094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.685275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.685318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.685527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.685570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.685754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.685799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.685988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.686014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.686196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.686239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.686429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.686472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.686716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.686766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.686927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.686952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.687139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.687181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.687367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.687415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.687622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.687665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.687876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.687918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.688078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.688109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.688273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.688316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.688498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.688544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.688738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.688787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.688947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.688973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.689133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.689159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.689366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.689409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.689583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.689626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.689815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.689858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.690018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.690043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.690222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.690251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.690457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.690501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.690710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.690752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.690910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.690935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.691096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.691127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.691316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.691358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.232 [2024-07-25 00:14:17.691538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.232 [2024-07-25 00:14:17.691581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.232 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.691756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.691798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.691938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.691963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.692120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.692146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.692327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.692371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.692516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.692558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.692754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.692780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.692923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.692948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.693160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.693189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.693399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.693428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.693596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.693638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.693796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.693822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.693979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.694004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.694211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.694254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.694426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.694473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.694648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.694691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.694854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.694880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.695037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.695063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.695250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.695293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.695491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.695519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.695719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.695761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.695919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.695950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.696131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.696174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.696351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.696393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.696601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.696644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.696782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.696807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.696964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.696989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.697168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.697211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.697377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.697420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.697628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.697671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.697831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.697856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.698015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.698040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.698219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.233 [2024-07-25 00:14:17.698260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.233 qpair failed and we were unable to recover it.
00:34:12.233 [2024-07-25 00:14:17.698439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.698482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.698649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.698692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.698888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.698913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.699041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.699066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.699228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.699271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.699490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.699533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.699740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.699783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.699969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.699993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.700162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.700191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.700387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.700430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.700617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.700660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.700835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.700880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.701045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.701070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.701287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.701330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.701508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.701555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.701745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.701788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.701924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.701951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.702135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.702161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.702332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.702375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.702524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.702567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.702774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.702817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.702956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.702982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.703156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.703185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.703364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.703390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.703549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.703591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.703753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.703779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.703935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.703960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.704132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.704175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.704331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.704374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.704556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.704599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.704732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.704757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.704914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.704940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.705099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.705134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.705320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.705366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.234 [2024-07-25 00:14:17.705558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.234 [2024-07-25 00:14:17.705584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.234 qpair failed and we were unable to recover it.
00:34:12.235 [2024-07-25 00:14:17.705798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.235 [2024-07-25 00:14:17.705841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.235 qpair failed and we were unable to recover it.
00:34:12.235 [2024-07-25 00:14:17.706003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.235 [2024-07-25 00:14:17.706029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.235 qpair failed and we were unable to recover it.
00:34:12.235 [2024-07-25 00:14:17.706215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.235 [2024-07-25 00:14:17.706259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.235 qpair failed and we were unable to recover it.
00:34:12.235 [2024-07-25 00:14:17.706468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.235 [2024-07-25 00:14:17.706510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.235 qpair failed and we were unable to recover it.
00:34:12.235 [2024-07-25 00:14:17.706699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.706742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.706876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.706902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.707094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.707127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.707290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.707332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.707517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.707559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 
00:34:12.235 [2024-07-25 00:14:17.707742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.707785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.707911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.707937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.708121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.708160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.708351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.708394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.708570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.708598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 
00:34:12.235 [2024-07-25 00:14:17.708772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.708800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.708963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.708991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.709178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.709204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.709352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.709379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.709585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.709613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 
00:34:12.235 [2024-07-25 00:14:17.709766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.709794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.709996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.710040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.710209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.710236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.710419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.710462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.710644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.710689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 
00:34:12.235 [2024-07-25 00:14:17.710864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.710907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.711076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.711110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.711320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.711364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.711521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.711564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.711745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.711789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 
00:34:12.235 [2024-07-25 00:14:17.711949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.711975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.712151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.712179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.712366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.712409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.712588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.712631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.712766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.712791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 
00:34:12.235 [2024-07-25 00:14:17.712930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.712955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.713116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.235 [2024-07-25 00:14:17.713143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.235 qpair failed and we were unable to recover it. 00:34:12.235 [2024-07-25 00:14:17.713291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.713334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.713537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.713580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.713716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.713742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 
00:34:12.236 [2024-07-25 00:14:17.713899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.713924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.714083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.714114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.714291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.714334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.714514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.714560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.714720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.714762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 
00:34:12.236 [2024-07-25 00:14:17.714945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.714970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.715145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.715173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.715367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.715410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.715572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.715615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.715803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.715828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 
00:34:12.236 [2024-07-25 00:14:17.715985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.716010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.716160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.716206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.716395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.716437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.716586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.716628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.716791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.716817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 
00:34:12.236 [2024-07-25 00:14:17.716952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.716977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.717106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.717132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.717342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.717386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.717568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.717610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.717796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.717821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 
00:34:12.236 [2024-07-25 00:14:17.717959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.717985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.718125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.718157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.718379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.718421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.718607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.718655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.718819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.718845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 
00:34:12.236 [2024-07-25 00:14:17.719005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.719031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.719194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.719237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.719391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.719434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.719617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.719660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.719818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.719843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 
00:34:12.236 [2024-07-25 00:14:17.720000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.720025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.720181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.720223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.720408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.720450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.236 [2024-07-25 00:14:17.720617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.236 [2024-07-25 00:14:17.720660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.236 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.720814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.720839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 
00:34:12.237 [2024-07-25 00:14:17.721008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.721033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.721193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.721239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.721400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.721444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.721629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.721671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.721829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.721854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 
00:34:12.237 [2024-07-25 00:14:17.722037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.722062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.722256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.722300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.722482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.722525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.722711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.722753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.722919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.722945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 
00:34:12.237 [2024-07-25 00:14:17.723141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.723170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.723333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.723379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.723564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.723607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.723811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.723851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.724080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.724124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 
00:34:12.237 [2024-07-25 00:14:17.724330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.724359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.724607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.724639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.724810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.724843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.725053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.725082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.725255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.725293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 
00:34:12.237 [2024-07-25 00:14:17.725475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.725519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.725665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.725708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.726019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.726073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.726247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.726274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.726460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.726502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 
00:34:12.237 [2024-07-25 00:14:17.726694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.726737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.726924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.726955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.727115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.727158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.727299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.727343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.727500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.727541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 
00:34:12.237 [2024-07-25 00:14:17.727731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.727773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.727931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.727956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.728143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.728188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.728396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.728439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 00:34:12.237 [2024-07-25 00:14:17.728727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.237 [2024-07-25 00:14:17.728791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.237 qpair failed and we were unable to recover it. 
00:34:12.237 [2024-07-25 00:14:17.728973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.728998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.729155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.729198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.729389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.729431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.729612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.729658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.729834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.729876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 
00:34:12.238 [2024-07-25 00:14:17.730012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.730038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.730229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.730272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.730422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.730466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.730654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.730697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.730878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.730903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 
00:34:12.238 [2024-07-25 00:14:17.731083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.731119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.731304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.731347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.731498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.731526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.731722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.731750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.731894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.731919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 
00:34:12.238 [2024-07-25 00:14:17.732079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.732112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.732294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.732336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.732576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.732618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.732835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.732879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.733058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.733083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 
00:34:12.238 [2024-07-25 00:14:17.733245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.733271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.733444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.733471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.733620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.733662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.733922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.733968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.734123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.734149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 
00:34:12.238 [2024-07-25 00:14:17.734341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.734367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.734614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.734666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.734851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.734894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.735060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.735085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.238 [2024-07-25 00:14:17.735270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.735312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 
00:34:12.238 [2024-07-25 00:14:17.735494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.238 [2024-07-25 00:14:17.735538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.238 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.735716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.735766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.735951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.735977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.736155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.736198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.736379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.736422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 
00:34:12.239 [2024-07-25 00:14:17.736603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.736648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.736827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.736870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.737007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.737032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.737191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.737234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.737413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.737456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 
00:34:12.239 [2024-07-25 00:14:17.737663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.737705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.737834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.737859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.738019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.738044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.738236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.738279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.738453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.738495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 
00:34:12.239 [2024-07-25 00:14:17.738654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.738697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.738859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.738884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.739041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.739066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.739252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.739296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.739446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.739492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 
00:34:12.239 [2024-07-25 00:14:17.739699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.739742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.739905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.739931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.740090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.740124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.740310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.740352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.740533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.740577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 
00:34:12.239 [2024-07-25 00:14:17.740761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.740804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.740988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.741014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.741201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.741245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.741458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.741500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.741743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.741785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 
00:34:12.239 [2024-07-25 00:14:17.741946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.741971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.742148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.742177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.742364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.742411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.742621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.742663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.742819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.742843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 
00:34:12.239 [2024-07-25 00:14:17.742997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.743022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.743246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.239 [2024-07-25 00:14:17.743291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.239 qpair failed and we were unable to recover it. 00:34:12.239 [2024-07-25 00:14:17.743475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.240 [2024-07-25 00:14:17.743517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.240 qpair failed and we were unable to recover it. 00:34:12.240 [2024-07-25 00:14:17.743696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.240 [2024-07-25 00:14:17.743738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.240 qpair failed and we were unable to recover it. 00:34:12.240 [2024-07-25 00:14:17.743894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.240 [2024-07-25 00:14:17.743919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.240 qpair failed and we were unable to recover it. 
00:34:12.240 [2024-07-25 00:14:17.744113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.240 [2024-07-25 00:14:17.744147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.240 qpair failed and we were unable to recover it. 00:34:12.240 [2024-07-25 00:14:17.744344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.240 [2024-07-25 00:14:17.744375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.240 qpair failed and we were unable to recover it. 00:34:12.240 [2024-07-25 00:14:17.744554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.240 [2024-07-25 00:14:17.744597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.240 qpair failed and we were unable to recover it. 00:34:12.240 [2024-07-25 00:14:17.744788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.240 [2024-07-25 00:14:17.744830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.240 qpair failed and we were unable to recover it. 00:34:12.240 [2024-07-25 00:14:17.744990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.240 [2024-07-25 00:14:17.745015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.240 qpair failed and we were unable to recover it. 
00:34:12.240 [2024-07-25 00:14:17.745197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.745241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.745427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.745470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.745654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.745697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.745839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.745865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.746024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.746049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.746229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.746271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.746452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.746494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.746676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.746704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.746884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.746909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.747039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.747064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.747230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.747273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.747425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.747452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.747614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.747656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.747814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.747840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.747971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.747998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.748204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.748247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.748405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.748448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.748596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.748638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.748789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.748814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.748999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.749024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.749203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.749231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.749398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.749441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.749619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.749660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.749847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.749873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.750034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.750059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.750218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.750261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.750437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.750479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.750658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.750701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.240 [2024-07-25 00:14:17.750857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.240 [2024-07-25 00:14:17.750882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.240 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.751065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.751091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.751318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.751361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.751548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.751592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.751750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.751791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.751957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.751982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.752156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.752185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.752414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.752456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.752672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.752714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.752855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.752881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.753017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.753043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.753196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.753240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.753422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.753465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.753655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.753682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.753843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.753869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.754035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.754061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.754249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.754293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.754444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.754488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.754703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.754745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.754909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.754934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.755069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.755096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.755253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.755298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.755513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.755556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.755736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.755780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.755962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.755987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.756165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.756209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.756391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.756434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.756603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.756645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.756809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.756833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.757016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.757041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.757195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.757238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.757394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.757437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.757571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.757598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.757754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.757780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.757937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.757963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.758125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.758155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.758340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.241 [2024-07-25 00:14:17.758384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.241 qpair failed and we were unable to recover it.
00:34:12.241 [2024-07-25 00:14:17.758556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.758601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.758793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.758818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.758974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.758999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.759202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.759248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.759427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.759470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.759650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.759692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.759849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.759874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.760031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.760057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.760270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.760313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.760495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.760540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.760724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.760766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.760923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.760948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.761160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.761204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.761368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.761411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.761606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.761648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.761807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.761833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.761992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.762018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.762201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.762244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.762425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.762468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.762630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.762672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.762833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.762858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.763041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.763066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.763245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.763288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.763474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.763516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.763724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.763752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.763958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.763984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.764121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.764147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.764306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.764334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.764506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.764549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.764737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.764780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.764934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.764959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.765158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.765187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.765375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.765418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.765579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.765622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.765782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.765807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.765945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.765971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.242 qpair failed and we were unable to recover it.
00:34:12.242 [2024-07-25 00:14:17.766108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.242 [2024-07-25 00:14:17.766134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.766322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.766366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.766517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.766563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.766722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.766748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.766928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.766954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.767158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.767202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.767414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.767456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.767642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.767686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.767840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.767865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.768022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.768046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.768233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.768276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.768455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.768498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.768705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.768747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.768878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.768904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.769089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.769120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.769272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.769316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.769472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.769515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.769671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.769713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.769870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.769895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.770079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.770110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.770299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.770341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.770521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.770564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.770743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.770786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.770926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.770952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.771134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.771177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.771391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.771433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.771657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.771699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.771851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.771894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.772082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.772113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.772271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.772314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.772495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.772537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.772698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.243 [2024-07-25 00:14:17.772741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.243 qpair failed and we were unable to recover it.
00:34:12.243 [2024-07-25 00:14:17.772926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.772952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.773131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.773174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.773347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.773389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.773569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.773613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.773787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.773829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.773988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.774013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.774200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.774243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.774420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.774464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.774643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.774686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.774842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.774867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.775002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.775031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.775210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.775254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.775403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.775446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.775622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.775662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.775820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.775847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.776032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.776057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.776267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.776310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.776504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.776533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.776753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.776795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.776981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.777006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.777160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.777189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.777390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.777418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.777583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.777625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.777765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.777790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.777944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.777969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.778167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.778195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.778400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.778429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.778615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.778641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.778797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.778823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.778985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.779011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.779178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.779204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.779334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.779359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.779535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.779562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.779726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.779752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.779877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.779902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.780086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.244 [2024-07-25 00:14:17.780119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.244 qpair failed and we were unable to recover it.
00:34:12.244 [2024-07-25 00:14:17.780329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.780357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.780528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.780570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.780739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.780781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.780939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.780964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.781118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.781164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.781345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.781390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.781612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.781655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.781814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.781841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.781976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.782002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.782187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.782230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.782408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.782450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.782627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.782670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.782807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.782833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.782991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.783017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.783174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.783222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.783402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.783445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.783667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.783709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.783871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.783896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.784079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.784111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.784274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.784317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.784524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.245 [2024-07-25 00:14:17.784565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.245 qpair failed and we were unable to recover it.
00:34:12.245 [2024-07-25 00:14:17.784745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.245 [2024-07-25 00:14:17.784793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.245 qpair failed and we were unable to recover it. 00:34:12.245 [2024-07-25 00:14:17.784929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.245 [2024-07-25 00:14:17.784955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.245 qpair failed and we were unable to recover it. 00:34:12.245 [2024-07-25 00:14:17.785089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.245 [2024-07-25 00:14:17.785123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.245 qpair failed and we were unable to recover it. 00:34:12.245 [2024-07-25 00:14:17.785280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.245 [2024-07-25 00:14:17.785323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.245 qpair failed and we were unable to recover it. 00:34:12.245 [2024-07-25 00:14:17.785534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.245 [2024-07-25 00:14:17.785577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.245 qpair failed and we were unable to recover it. 
00:34:12.245 [2024-07-25 00:14:17.785718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.245 [2024-07-25 00:14:17.785744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.245 qpair failed and we were unable to recover it. 00:34:12.245 [2024-07-25 00:14:17.785899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.245 [2024-07-25 00:14:17.785924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.245 qpair failed and we were unable to recover it. 00:34:12.245 [2024-07-25 00:14:17.786054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.245 [2024-07-25 00:14:17.786080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.245 qpair failed and we were unable to recover it. 00:34:12.245 [2024-07-25 00:14:17.786248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.245 [2024-07-25 00:14:17.786290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.245 qpair failed and we were unable to recover it. 00:34:12.245 [2024-07-25 00:14:17.786497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.245 [2024-07-25 00:14:17.786540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.245 qpair failed and we were unable to recover it. 
00:34:12.245 [2024-07-25 00:14:17.786744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.245 [2024-07-25 00:14:17.786787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.245 qpair failed and we were unable to recover it. 00:34:12.245 [2024-07-25 00:14:17.786942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.245 [2024-07-25 00:14:17.786968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.245 qpair failed and we were unable to recover it. 00:34:12.245 [2024-07-25 00:14:17.787132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.245 [2024-07-25 00:14:17.787159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.245 qpair failed and we were unable to recover it. 00:34:12.245 [2024-07-25 00:14:17.787368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.245 [2024-07-25 00:14:17.787412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.245 qpair failed and we were unable to recover it. 00:34:12.245 [2024-07-25 00:14:17.787586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.245 [2024-07-25 00:14:17.787631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.245 qpair failed and we were unable to recover it. 
00:34:12.245 [2024-07-25 00:14:17.787823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.245 [2024-07-25 00:14:17.787848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.245 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.788007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.788034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.788213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.788257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.788438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.788482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.788658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.788700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 
00:34:12.246 [2024-07-25 00:14:17.788865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.788900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.789027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.789053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.789261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.789304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.789498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.789540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.789718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.789760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 
00:34:12.246 [2024-07-25 00:14:17.789894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.789920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.790077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.790107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.790313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.790355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.790527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.790554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.790741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.790783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 
00:34:12.246 [2024-07-25 00:14:17.790972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.790997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.791136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.791161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.791348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.791390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.791571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.791622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.791791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.791816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 
00:34:12.246 [2024-07-25 00:14:17.791976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.792001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.792175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.792219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.792409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.792451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.792630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.792673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.792833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.792859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 
00:34:12.246 [2024-07-25 00:14:17.793028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.793053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.793183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.793210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.793396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.793439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.793622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.793665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.793824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.793850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 
00:34:12.246 [2024-07-25 00:14:17.793981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.794006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.794144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.794172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.794351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.794395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.794547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.794592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.794771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.794814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 
00:34:12.246 [2024-07-25 00:14:17.794974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.794999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.795158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.246 [2024-07-25 00:14:17.795186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.246 qpair failed and we were unable to recover it. 00:34:12.246 [2024-07-25 00:14:17.795411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.795454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.795603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.795645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.795805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.795832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 
00:34:12.247 [2024-07-25 00:14:17.795991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.796017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.796228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.796271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.796477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.796521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.796730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.796773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.796909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.796936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 
00:34:12.247 [2024-07-25 00:14:17.797078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.797111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.797333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.797377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.797538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.797565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.797760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.797803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.797963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.797987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 
00:34:12.247 [2024-07-25 00:14:17.798122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.798148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.798336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.798390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.798575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.798619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.798774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.798818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.798952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.798978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 
00:34:12.247 [2024-07-25 00:14:17.799160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.799190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.799366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.799409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.799589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.799633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.799817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.799847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.800004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.800029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 
00:34:12.247 [2024-07-25 00:14:17.800210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.800239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.800434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.800478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.800660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.800707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.800892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.800917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.801053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.801078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 
00:34:12.247 [2024-07-25 00:14:17.801251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.801295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.801507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.801550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.801763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.801805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.801967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.801992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 00:34:12.247 [2024-07-25 00:14:17.802172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.247 [2024-07-25 00:14:17.802216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.247 qpair failed and we were unable to recover it. 
00:34:12.247 [2024-07-25 00:14:17.802401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.247 [2024-07-25 00:14:17.802444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.247 qpair failed and we were unable to recover it.
[... the identical triplet — posix.c:1037:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." — repeats continuously for the remainder of this segment, timestamps 00:14:17.802591 through 00:14:17.826743 ...]
00:34:12.251 [2024-07-25 00:14:17.826895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.826920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.827079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.827110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.827271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.827315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.827501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.827544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.827723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.827765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 
00:34:12.251 [2024-07-25 00:14:17.827923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.827948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.828131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.828159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.828339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.828381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.828558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.828603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.828812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.828853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 
00:34:12.251 [2024-07-25 00:14:17.829017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.829043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.829220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.829262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.829475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.829518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.829703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.829745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.829927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.829953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 
00:34:12.251 [2024-07-25 00:14:17.830135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.830161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.830315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.830358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.830547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.830589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.830769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.830813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.830967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.830993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 
00:34:12.251 [2024-07-25 00:14:17.831179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.831223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.831375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.831421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.831601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.831648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.831837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.831862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.832018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.832044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 
00:34:12.251 [2024-07-25 00:14:17.832218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.832260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.832442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.832486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.832699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.832741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.832921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.251 [2024-07-25 00:14:17.832946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.251 qpair failed and we were unable to recover it. 00:34:12.251 [2024-07-25 00:14:17.833146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.833172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 
00:34:12.252 [2024-07-25 00:14:17.833344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.833387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.833548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.833592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.833743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.833785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.833941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.833972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.834113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.834139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 
00:34:12.252 [2024-07-25 00:14:17.834351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.834394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.834558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.834601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.834775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.834818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.835006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.835031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.835232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.835275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 
00:34:12.252 [2024-07-25 00:14:17.835435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.835478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.835650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.835692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.835851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.835876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.836038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.836063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.836258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.836300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 
00:34:12.252 [2024-07-25 00:14:17.836477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.836520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.836674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.836703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.836858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.836885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.837047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.837073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.837265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.837308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 
00:34:12.252 [2024-07-25 00:14:17.837460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.837503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.837699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.837727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.837894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.837919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.838098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.838131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.838310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.838353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 
00:34:12.252 [2024-07-25 00:14:17.838557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.838602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.838808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.838851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.839016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.839041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.839255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.839297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.839504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.839546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 
00:34:12.252 [2024-07-25 00:14:17.839769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.839814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.839948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.839973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.840153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.840181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.840377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.840420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 00:34:12.252 [2024-07-25 00:14:17.840630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.252 [2024-07-25 00:14:17.840672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.252 qpair failed and we were unable to recover it. 
00:34:12.252 [2024-07-25 00:14:17.840879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.253 [2024-07-25 00:14:17.840922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.253 qpair failed and we were unable to recover it. 00:34:12.253 [2024-07-25 00:14:17.841055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.253 [2024-07-25 00:14:17.841082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.253 qpair failed and we were unable to recover it. 00:34:12.253 [2024-07-25 00:14:17.841219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.253 [2024-07-25 00:14:17.841246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.253 qpair failed and we were unable to recover it. 00:34:12.253 [2024-07-25 00:14:17.841429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.253 [2024-07-25 00:14:17.841471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.253 qpair failed and we were unable to recover it. 00:34:12.253 [2024-07-25 00:14:17.841632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.253 [2024-07-25 00:14:17.841674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.253 qpair failed and we were unable to recover it. 
00:34:12.253 [2024-07-25 00:14:17.841831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.253 [2024-07-25 00:14:17.841858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.253 qpair failed and we were unable to recover it. 00:34:12.253 [2024-07-25 00:14:17.842016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.253 [2024-07-25 00:14:17.842042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.253 qpair failed and we were unable to recover it. 00:34:12.253 [2024-07-25 00:14:17.842216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.253 [2024-07-25 00:14:17.842260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.253 qpair failed and we were unable to recover it. 00:34:12.253 [2024-07-25 00:14:17.842439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.253 [2024-07-25 00:14:17.842486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.253 qpair failed and we were unable to recover it. 00:34:12.253 [2024-07-25 00:14:17.842670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.253 [2024-07-25 00:14:17.842713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.253 qpair failed and we were unable to recover it. 
00:34:12.253 [2024-07-25 00:14:17.842850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.842876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.843033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.843058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.843245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.843288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.843483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.843509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.843719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.843763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.843949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.843974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.844126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.844152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.844315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.844358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.844565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.844607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.844738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.844764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.844948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.844974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.845129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.845157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.845344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.845387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.845588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.845631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.845764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.845789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.845952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.845978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.846112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.846138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.846345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.846373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.846595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.846638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.846818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.846860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.847046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.847072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.253 [2024-07-25 00:14:17.847261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.253 [2024-07-25 00:14:17.847305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.253 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.847488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.847530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.847733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.847759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.847910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.847935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.848093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.848125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.848312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.848355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.848572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.848615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.848806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.848848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.849005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.849030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.849244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.849287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.849491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.849535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.849746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.849789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.849952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.849977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.850153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.850182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.850408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.850450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.850663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.850707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.850897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.850940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.851077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.851113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.851326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.851369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.851544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.851587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.851761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.851804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.851968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.851994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.852173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.852217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.852392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.852434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.852624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.852667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.852851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.852893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.853049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.853075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.853284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.853328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.853487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.853531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.853739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.853783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.853943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.853968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.854155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.854184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.854381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.854424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.854603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.854646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.854801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.854827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.854988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.855013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.855194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.254 [2024-07-25 00:14:17.855238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.254 qpair failed and we were unable to recover it.
00:34:12.254 [2024-07-25 00:14:17.855415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.855457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.855638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.855681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.855824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.855852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.856009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.856035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.856229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.856273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.856491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.856534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.856693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.856736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.856883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.856909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.857067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.857093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.857269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.857312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.857489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.857533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.857717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.857760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.857922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.857947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.858106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.858133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.858292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.858338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.858488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.858529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.858710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.858753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.858912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.858939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.859094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.859126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.859314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.859357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.859514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.859558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.859743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.859786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.859947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.859973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.860115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.860141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.860329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.860372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.860540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.860582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.860792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.860835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.861021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.861046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.861197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.861239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.861445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.861488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.861647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.861691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.861907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.861949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.862152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.862178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.862383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.862411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.862601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.862648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.862809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.255 [2024-07-25 00:14:17.862835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.255 qpair failed and we were unable to recover it.
00:34:12.255 [2024-07-25 00:14:17.863018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.863043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.863186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.863213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.863425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.863468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.863649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.863690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.863847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.863873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.864029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.864054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.864214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.864257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.864410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.864453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.864606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.864648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.864829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.864872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.865014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.865040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.865264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.865312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.865481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.865525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.865732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.865775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.865959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.865984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.866157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.866186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.866393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.866437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.866650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.866692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.866853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.866878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.867056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.867081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.867271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.256 [2024-07-25 00:14:17.867317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.256 qpair failed and we were unable to recover it.
00:34:12.256 [2024-07-25 00:14:17.867494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.256 [2024-07-25 00:14:17.867540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.256 qpair failed and we were unable to recover it. 00:34:12.256 [2024-07-25 00:14:17.867739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.256 [2024-07-25 00:14:17.867780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.256 qpair failed and we were unable to recover it. 00:34:12.256 [2024-07-25 00:14:17.867971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.256 [2024-07-25 00:14:17.867997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.256 qpair failed and we were unable to recover it. 00:34:12.256 [2024-07-25 00:14:17.868201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.256 [2024-07-25 00:14:17.868246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.256 qpair failed and we were unable to recover it. 00:34:12.256 [2024-07-25 00:14:17.868399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.256 [2024-07-25 00:14:17.868441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.256 qpair failed and we were unable to recover it. 
00:34:12.256 [2024-07-25 00:14:17.868650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.256 [2024-07-25 00:14:17.868693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.256 qpair failed and we were unable to recover it. 00:34:12.256 [2024-07-25 00:14:17.868855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.256 [2024-07-25 00:14:17.868880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.256 qpair failed and we were unable to recover it. 00:34:12.256 [2024-07-25 00:14:17.869016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.256 [2024-07-25 00:14:17.869042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.256 qpair failed and we were unable to recover it. 00:34:12.256 [2024-07-25 00:14:17.869226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.256 [2024-07-25 00:14:17.869271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.256 qpair failed and we were unable to recover it. 00:34:12.256 [2024-07-25 00:14:17.869446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.256 [2024-07-25 00:14:17.869490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.256 qpair failed and we were unable to recover it. 
00:34:12.256 [2024-07-25 00:14:17.869711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.256 [2024-07-25 00:14:17.869737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.256 qpair failed and we were unable to recover it. 00:34:12.256 [2024-07-25 00:14:17.869899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.256 [2024-07-25 00:14:17.869925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.256 qpair failed and we were unable to recover it. 00:34:12.256 [2024-07-25 00:14:17.870051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.256 [2024-07-25 00:14:17.870076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.256 qpair failed and we were unable to recover it. 00:34:12.256 [2024-07-25 00:14:17.870273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.256 [2024-07-25 00:14:17.870316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.256 qpair failed and we were unable to recover it. 00:34:12.256 [2024-07-25 00:14:17.870456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.256 [2024-07-25 00:14:17.870482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.256 qpair failed and we were unable to recover it. 
00:34:12.256 [2024-07-25 00:14:17.870639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.256 [2024-07-25 00:14:17.870685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.256 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.870879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.870904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.871048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.871073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.871282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.871326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.871483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.871526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 
00:34:12.257 [2024-07-25 00:14:17.871708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.871752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.871937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.871962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.872093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.872129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.872341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.872385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.872573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.872615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 
00:34:12.257 [2024-07-25 00:14:17.872771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.872798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.872956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.872982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.873153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.873184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.873409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.873452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.873665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.873706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 
00:34:12.257 [2024-07-25 00:14:17.873868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.873897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.874056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.874082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.874276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.874319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.874493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.874536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.874693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.874735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 
00:34:12.257 [2024-07-25 00:14:17.874903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.874929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.875087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.875118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.875276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.875319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.875495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.875541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.875698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.875741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 
00:34:12.257 [2024-07-25 00:14:17.875925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.875951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.876113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.876139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.876324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.876367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.876551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.876594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.876777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.876819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 
00:34:12.257 [2024-07-25 00:14:17.876973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.876998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.257 [2024-07-25 00:14:17.877195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.257 [2024-07-25 00:14:17.877238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.257 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.877448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.877490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.877687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.877714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.877898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.877924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 
00:34:12.258 [2024-07-25 00:14:17.878086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.878118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.878299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.878324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.878507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.878550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.878744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.878785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.878940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.878965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 
00:34:12.258 [2024-07-25 00:14:17.879120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.879146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.879305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.879347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.879506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.879552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.879736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.879762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.879947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.879973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 
00:34:12.258 [2024-07-25 00:14:17.880107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.880134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.880315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.880357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.880548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.880589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.880779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.880805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.880967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.880993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 
00:34:12.258 [2024-07-25 00:14:17.881199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.881243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.881400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.881425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.881635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.881678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.881875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.881901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.882059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.882085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 
00:34:12.258 [2024-07-25 00:14:17.882291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.882339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.882517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.882560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.882713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.882755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.882942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.882967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 00:34:12.258 [2024-07-25 00:14:17.883132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.883158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it. 
00:34:12.258 [2024-07-25 00:14:17.883305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.258 [2024-07-25 00:14:17.883348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.258 qpair failed and we were unable to recover it.
[the same connect()/qpair-recovery error pair repeats continuously from 2024-07-25 00:14:17.883552 through 00:14:17.907588 (log clock 00:34:12.258-00:34:12.261): every reconnect attempt for tqpair=0x7fda78000b90 to addr=10.0.0.2, port=4420 fails with errno = 111, followed by "qpair failed and we were unable to recover it."]
00:34:12.261 [2024-07-25 00:14:17.907772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.261 [2024-07-25 00:14:17.907819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.261 qpair failed and we were unable to recover it. 00:34:12.261 [2024-07-25 00:14:17.907989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.261 [2024-07-25 00:14:17.908015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.261 qpair failed and we were unable to recover it. 00:34:12.261 [2024-07-25 00:14:17.908197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.908241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.908426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.908468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.908652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.908698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 
00:34:12.262 [2024-07-25 00:14:17.908835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.908861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.908993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.909020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.909210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.909255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.909407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.909449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.909623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.909665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 
00:34:12.262 [2024-07-25 00:14:17.909823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.909849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.910033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.910059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.910217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.910261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.910413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.910442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.910626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.910669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 
00:34:12.262 [2024-07-25 00:14:17.910802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.910830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.910974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.911000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.911174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.911218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.911420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.911463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.911676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.911718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 
00:34:12.262 [2024-07-25 00:14:17.911904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.911929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.912086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.912117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.912305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.912348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.912556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.912598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.912768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.912811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 
00:34:12.262 [2024-07-25 00:14:17.912975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.913000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.913182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.913224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.913376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.913418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.913614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.913641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.913802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.913828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 
00:34:12.262 [2024-07-25 00:14:17.914014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.914040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.914206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.914250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.262 [2024-07-25 00:14:17.914430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.262 [2024-07-25 00:14:17.914473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.262 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.914628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.914670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.914852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.914877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 
00:34:12.543 [2024-07-25 00:14:17.915038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.915063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.915213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.915257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.915417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.915459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.915612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.915655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.915812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.915839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 
00:34:12.543 [2024-07-25 00:14:17.916072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.916109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.916285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.916327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.916485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.916528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.916706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.916752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.916913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.916938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 
00:34:12.543 [2024-07-25 00:14:17.917131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.917162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.917349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.917391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.917543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.917590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.917781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.917823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.917978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.918004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 
00:34:12.543 [2024-07-25 00:14:17.918186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.918215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.918394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.918435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.918647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.918690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.918880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.918906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.919072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.919097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 
00:34:12.543 [2024-07-25 00:14:17.919291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.919334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.543 qpair failed and we were unable to recover it. 00:34:12.543 [2024-07-25 00:14:17.919552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.543 [2024-07-25 00:14:17.919595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 00:34:12.544 [2024-07-25 00:14:17.919772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.919817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 00:34:12.544 [2024-07-25 00:14:17.920004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.920030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 00:34:12.544 [2024-07-25 00:14:17.920217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.920261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 
00:34:12.544 [2024-07-25 00:14:17.920470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.920512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 00:34:12.544 [2024-07-25 00:14:17.920700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.920743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 00:34:12.544 [2024-07-25 00:14:17.920897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.920922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 00:34:12.544 [2024-07-25 00:14:17.921108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.921134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 00:34:12.544 [2024-07-25 00:14:17.921319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.921362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 
00:34:12.544 [2024-07-25 00:14:17.921544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.921589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 00:34:12.544 [2024-07-25 00:14:17.921801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.921844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 00:34:12.544 [2024-07-25 00:14:17.922038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.922064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 00:34:12.544 [2024-07-25 00:14:17.922238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.922263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 00:34:12.544 [2024-07-25 00:14:17.922422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.922463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 
00:34:12.544 [2024-07-25 00:14:17.922613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.922654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 00:34:12.544 [2024-07-25 00:14:17.922842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.922885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 00:34:12.544 [2024-07-25 00:14:17.923035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.923060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 00:34:12.544 [2024-07-25 00:14:17.923250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.923294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 00:34:12.544 [2024-07-25 00:14:17.923447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.544 [2024-07-25 00:14:17.923490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.544 qpair failed and we were unable to recover it. 
00:34:12.544 [2024-07-25 00:14:17.923644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.544 [2024-07-25 00:14:17.923687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.544 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7fda78000b90 from 00:14:17.923875 through 00:14:17.938785 ...]
00:34:12.546 [2024-07-25 00:14:17.938991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.546 [2024-07-25 00:14:17.939031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.546 qpair failed and we were unable to recover it.
[... identical sequence repeats for tqpair=0x7fda70000b90 from 00:14:17.939189 through 00:14:17.947955 ...]
00:34:12.547 [2024-07-25 00:14:17.948119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.547 [2024-07-25 00:14:17.948146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.547 qpair failed and we were unable to recover it. 00:34:12.547 [2024-07-25 00:14:17.948283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.547 [2024-07-25 00:14:17.948310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.547 qpair failed and we were unable to recover it. 00:34:12.547 [2024-07-25 00:14:17.948494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.547 [2024-07-25 00:14:17.948520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.547 qpair failed and we were unable to recover it. 00:34:12.547 [2024-07-25 00:14:17.948650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.547 [2024-07-25 00:14:17.948675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.547 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.948873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.948912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 
00:34:12.548 [2024-07-25 00:14:17.949085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.949141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.949314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.949340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.949521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.949565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.949751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.949777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.949937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.949963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 
00:34:12.548 [2024-07-25 00:14:17.950108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.950139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.950309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.950335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.950549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.950579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.950797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.950825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.951000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.951029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 
00:34:12.548 [2024-07-25 00:14:17.951212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.951239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.951402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.951428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.951608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.951638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.951843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.951872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.952081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.952113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 
00:34:12.548 [2024-07-25 00:14:17.952323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.952361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.952559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.952589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.952784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.952811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.953014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.953041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.953230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.953256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 
00:34:12.548 [2024-07-25 00:14:17.953409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.953434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.953608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.953636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.953804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.953832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.954003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.954031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.954211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.954241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 
00:34:12.548 [2024-07-25 00:14:17.954411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.954436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.954639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.548 [2024-07-25 00:14:17.954667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.548 qpair failed and we were unable to recover it. 00:34:12.548 [2024-07-25 00:14:17.954866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.954895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.955074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.955108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.955273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.955300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 
00:34:12.549 [2024-07-25 00:14:17.955433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.955459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.955689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.955730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.955934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.955967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.956170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.956196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.956342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.956368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 
00:34:12.549 [2024-07-25 00:14:17.956616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.956644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.956818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.956845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.956993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.957021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.957224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.957250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.957422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.957450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 
00:34:12.549 [2024-07-25 00:14:17.957619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.957647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.957909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.957936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.958118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.958166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.958331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.958355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.958530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.958558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 
00:34:12.549 [2024-07-25 00:14:17.958735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.958763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.958969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.958997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.959171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.959196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.959395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.959423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.959595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.959622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 
00:34:12.549 [2024-07-25 00:14:17.959819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.959847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.960047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.960074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.960286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.960312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.960441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.960465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.960625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.960667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 
00:34:12.549 [2024-07-25 00:14:17.960851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.960879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.961097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.961130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.961291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.961316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.961499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.961527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.961703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.961732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 
00:34:12.549 [2024-07-25 00:14:17.961908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.961936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.962083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.962119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.962302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.962327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.549 [2024-07-25 00:14:17.962504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.549 [2024-07-25 00:14:17.962531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.549 qpair failed and we were unable to recover it. 00:34:12.550 [2024-07-25 00:14:17.962761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.550 [2024-07-25 00:14:17.962789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.550 qpair failed and we were unable to recover it. 
00:34:12.550 [2024-07-25 00:14:17.962937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.550 [2024-07-25 00:14:17.962964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.550 qpair failed and we were unable to recover it. 00:34:12.550 [2024-07-25 00:14:17.963131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.550 [2024-07-25 00:14:17.963172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.550 qpair failed and we were unable to recover it. 00:34:12.550 [2024-07-25 00:14:17.963362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.550 [2024-07-25 00:14:17.963402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.550 qpair failed and we were unable to recover it. 00:34:12.550 [2024-07-25 00:14:17.963635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.550 [2024-07-25 00:14:17.963662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.550 qpair failed and we were unable to recover it. 00:34:12.550 [2024-07-25 00:14:17.963860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.550 [2024-07-25 00:14:17.963888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.550 qpair failed and we were unable to recover it. 
00:34:12.550 [2024-07-25 00:14:17.964074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.550 [2024-07-25 00:14:17.964099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.550 qpair failed and we were unable to recover it.
[... the three-line error sequence above (connect() failed, errno = 111 / sock connection error / qpair failed) repeats continuously for tqpair=0xfc4570 between 00:14:17.964074 and 00:14:17.975079 ...]
00:34:12.551 [2024-07-25 00:14:17.975283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.551 [2024-07-25 00:14:17.975321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.551 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7fda80000b90 through 00:14:17.976868, then again for tqpair=0xfc4570 through 00:14:17.986577; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:34:12.553 [2024-07-25 00:14:17.986783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.986808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-25 00:14:17.986956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.986984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-25 00:14:17.987166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.987192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-25 00:14:17.987352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.987377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-25 00:14:17.987512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.987537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 
00:34:12.553 [2024-07-25 00:14:17.987702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.987727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-25 00:14:17.987883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.987908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-25 00:14:17.988065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.988090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-25 00:14:17.988298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.988336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-25 00:14:17.988482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.988510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 
00:34:12.553 [2024-07-25 00:14:17.988698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.988732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-25 00:14:17.988928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.988956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-25 00:14:17.989130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.989156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-25 00:14:17.989312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.989355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-25 00:14:17.989551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.989576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 
00:34:12.553 [2024-07-25 00:14:17.989733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.989759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-25 00:14:17.989961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.989989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-25 00:14:17.990137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.990167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-25 00:14:17.990337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.990363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-25 00:14:17.990569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-25 00:14:17.990596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 
00:34:12.553 [2024-07-25 00:14:17.990798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.990826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.991004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.991029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.991171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.991198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.991403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.991431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.991614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.991640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 
00:34:12.554 [2024-07-25 00:14:17.991847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.991875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.992059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.992085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.992256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.992282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.992414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.992454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.992654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.992679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 
00:34:12.554 [2024-07-25 00:14:17.992812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.992837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.992994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.993019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.993199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.993224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.993416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.993441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.993648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.993675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 
00:34:12.554 [2024-07-25 00:14:17.993818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.993845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.994044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.994072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.994281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.994322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.994535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.994578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.994785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.994812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 
00:34:12.554 [2024-07-25 00:14:17.994992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.995020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.995201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.995229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.995410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.995436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.995617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.995645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.995935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.995987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 
00:34:12.554 [2024-07-25 00:14:17.996174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.996200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.996361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.996386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.996589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.996617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.996797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.996822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.996974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.997001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 
00:34:12.554 [2024-07-25 00:14:17.997190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.997218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.997379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.997404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.997607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.997635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.997932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.997990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.998176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.998202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 
00:34:12.554 [2024-07-25 00:14:17.998382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-25 00:14:17.998409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-25 00:14:17.998711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:17.998771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:17.998953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:17.998979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:17.999186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:17.999214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:17.999390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:17.999417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 
00:34:12.555 [2024-07-25 00:14:17.999577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:17.999603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:17.999820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:17.999876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:18.000055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.000080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:18.000249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.000276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:18.000438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.000471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 
00:34:12.555 [2024-07-25 00:14:18.000739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.000789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:18.000997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.001023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:18.001166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.001193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:18.001354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.001397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:18.001601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.001626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 
00:34:12.555 [2024-07-25 00:14:18.001826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.001854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:18.002049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.002077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:18.002244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.002270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:18.002403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.002444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:18.002716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.002741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 
00:34:12.555 [2024-07-25 00:14:18.002874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.002899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:18.003057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.003098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:18.003281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.003307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:18.003498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.003523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-25 00:14:18.003675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-25 00:14:18.003703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 
00:34:12.558 [2024-07-25 00:14:18.026059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-25 00:14:18.026084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-25 00:14:18.026266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-25 00:14:18.026294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-25 00:14:18.026467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-25 00:14:18.026492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-25 00:14:18.026667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-25 00:14:18.026695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-25 00:14:18.026861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-25 00:14:18.026889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 
00:34:12.558 [2024-07-25 00:14:18.027071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-25 00:14:18.027097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-25 00:14:18.027284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-25 00:14:18.027312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-25 00:14:18.027510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-25 00:14:18.027538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-25 00:14:18.027739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-25 00:14:18.027764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.027938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.027966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 
00:34:12.559 [2024-07-25 00:14:18.028126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.028157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.028339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.028364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.028521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.028549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.028749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.028774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.028928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.028953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 
00:34:12.559 [2024-07-25 00:14:18.029157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.029185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.029384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.029412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.029585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.029610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.029784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.029812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.029973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.029998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 
00:34:12.559 [2024-07-25 00:14:18.030137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.030163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.030322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.030349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.030526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.030553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.030753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.030778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.030966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.030996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 
00:34:12.559 [2024-07-25 00:14:18.031198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.031227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.031396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.031421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.031572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.031600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.031779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.031806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.031954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.031979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 
00:34:12.559 [2024-07-25 00:14:18.032139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.032181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.032366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.032391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.032549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.032575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.032787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.032815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.032992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.033020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 
00:34:12.559 [2024-07-25 00:14:18.033195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.033220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.033415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.033442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.033589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.033618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.033802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.033827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.034008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.034036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 
00:34:12.559 [2024-07-25 00:14:18.034220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.034246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.034432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.034456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.034613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.034640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.034837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.034865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-25 00:14:18.035039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.035064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 
00:34:12.559 [2024-07-25 00:14:18.035255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-25 00:14:18.035280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.035421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.035446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.035605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.035630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.035786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.035811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.035965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.035990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 
00:34:12.560 [2024-07-25 00:14:18.036198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.036224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.036397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.036432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.036616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.036641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.036802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.036827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.036959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.036984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 
00:34:12.560 [2024-07-25 00:14:18.037189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.037217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.037374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.037400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.037578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.037620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.037792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.037820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.038018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.038043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 
00:34:12.560 [2024-07-25 00:14:18.038214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.038239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.038442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.038470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.038644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.038670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.038848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.038877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.039029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.039058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 
00:34:12.560 [2024-07-25 00:14:18.039261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.039286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.039486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.039514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.039688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.039716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.039888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.039912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.040090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.040125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 
00:34:12.560 [2024-07-25 00:14:18.040305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.040330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.040518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.040543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.040698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.040723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.040901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.040929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-25 00:14:18.041127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.041153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 
00:34:12.560 [2024-07-25 00:14:18.041303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-25 00:14:18.041331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 
00:34:12.564 [2024-07-25 00:14:18.064264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.064290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.064448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.064473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.064611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.064635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.065136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.065179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.065347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.065374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 
00:34:12.564 [2024-07-25 00:14:18.065575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.065601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.065752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.065794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.066015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.066041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.066199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.066228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.066380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.066410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 
00:34:12.564 [2024-07-25 00:14:18.066573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.066599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.066743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.066769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.066908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.066934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.067059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.067084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.067295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.067320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 
00:34:12.564 [2024-07-25 00:14:18.067529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.067557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.067729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.067755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.067897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.067923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.068085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.068135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.068318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.068344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 
00:34:12.564 [2024-07-25 00:14:18.068512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.068537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.068670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.068696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.068846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.068871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.069058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.069085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.069276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.069301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 
00:34:12.564 [2024-07-25 00:14:18.069461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.069486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.069667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.069694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.069875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.069903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.070081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.070147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.070287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.070312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 
00:34:12.564 [2024-07-25 00:14:18.070477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.070502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.070662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.070687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.070836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.070864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.071043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.071068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 00:34:12.564 [2024-07-25 00:14:18.071225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.564 [2024-07-25 00:14:18.071251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.564 qpair failed and we were unable to recover it. 
00:34:12.564 [2024-07-25 00:14:18.071428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.071456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.071656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.071684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.071862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.071887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.072096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.072132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.072279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.072306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 
00:34:12.565 [2024-07-25 00:14:18.072488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.072513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.072690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.072718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.072887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.072914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.073124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.073150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.073323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.073351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 
00:34:12.565 [2024-07-25 00:14:18.073482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.073509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.073678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.073703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.073830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.073872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.074065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.074093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.074252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.074277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 
00:34:12.565 [2024-07-25 00:14:18.074486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.074514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.074690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.074718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.074898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.074923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.075099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.075130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.075280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.075305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 
00:34:12.565 [2024-07-25 00:14:18.075501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.075527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.075680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.075706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.075891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.075916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.076050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.076075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.076249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.076274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 
00:34:12.565 [2024-07-25 00:14:18.076453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.076478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.076657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.076682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.076863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.076888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.077041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.077066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.077233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.077259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 
00:34:12.565 [2024-07-25 00:14:18.077411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.077440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.077576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.077602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.077756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.077782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.077913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.077940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.078123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.078155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 
00:34:12.565 [2024-07-25 00:14:18.078319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.078345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.565 [2024-07-25 00:14:18.078474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.565 [2024-07-25 00:14:18.078499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.565 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-25 00:14:18.078692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-25 00:14:18.078718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-25 00:14:18.078850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-25 00:14:18.078875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-25 00:14:18.079032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-25 00:14:18.079059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 
00:34:12.566 [2024-07-25 00:14:18.079237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-25 00:14:18.079263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-25 00:14:18.079454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-25 00:14:18.079479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-25 00:14:18.079620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-25 00:14:18.079645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-25 00:14:18.079833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-25 00:14:18.079858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-25 00:14:18.080028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-25 00:14:18.080053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 
00:34:12.566 [2024-07-25 00:14:18.080214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-25 00:14:18.080239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-25 00:14:18.080418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-25 00:14:18.080443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-25 00:14:18.080607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-25 00:14:18.080633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-25 00:14:18.080813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-25 00:14:18.080838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-25 00:14:18.081011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-25 00:14:18.081036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 
00:34:12.566 [2024-07-25 00:14:18.081222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.081247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.081424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.081449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.081634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.081659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.081790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.081815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.081968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.081993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.082173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.082211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.082362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.082390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.082524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.082550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.082719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.082744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.082902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.082930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.083118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.083160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.083353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.083384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.083580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.083607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.083774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.083802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.084044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.084072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.084295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.084320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.084461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.084486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.084668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.084697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.084870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.084898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.085039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.085068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.085330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.085356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.085543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.085569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.085727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.085752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-25 00:14:18.085931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-25 00:14:18.085959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.086160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.086186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.086319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.086346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.086548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.086576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.086749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.086777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.086921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.086949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.087127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.087163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.087299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.087324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.087501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.087526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.087735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.087763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.087935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.087963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.088131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.088173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.088338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.088364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.088525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.088551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.088686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.088712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.088867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.088895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.089074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.089110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.089283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.089308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.089471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.089499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.089675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.089703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.089860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.089889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.090072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.090097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.090260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.090285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.090445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.090471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.090658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.090686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.090939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.090972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.091179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.091206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.091362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.091387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.091543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.091572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.091779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.091820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.092063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.092088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-25 00:14:18.092236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-25 00:14:18.092262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.092448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.092475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.092716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.092744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.092949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.092977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.093128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.093170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.093326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.093351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.093538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.093563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.093716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.093757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.093959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.093987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.094170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.094196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.094387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.094412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.094640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.094668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.094924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.094951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.095139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.095165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.095301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.095325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.095512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.095537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.095670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.095695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.095873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.095901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.096113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.096164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.096307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.096332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.096506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.096538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.096744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.096777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.096978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.097006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.097184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.097209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.097346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.097371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.097531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.097556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.097717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.097745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.097947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.097975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.098126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.098169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.098331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.098360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.098535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.098563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.098729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.098757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.098920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.098948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.099142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.099189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.099360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.099386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.099571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.099599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.099770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.099798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.100003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.568 [2024-07-25 00:14:18.100028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.568 qpair failed and we were unable to recover it.
00:34:12.568 [2024-07-25 00:14:18.100215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.569 [2024-07-25 00:14:18.100242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.569 qpair failed and we were unable to recover it.
00:34:12.569 [2024-07-25 00:14:18.100436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.569 [2024-07-25 00:14:18.100461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.569 qpair failed and we were unable to recover it.
00:34:12.569 [2024-07-25 00:14:18.100705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.569 [2024-07-25 00:14:18.100734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.569 qpair failed and we were unable to recover it.
00:34:12.569 [2024-07-25 00:14:18.100940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.100968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.101180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.101205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.101362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.101387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.101545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.101569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.101696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.101721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 
00:34:12.569 [2024-07-25 00:14:18.101903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.101928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.102090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.102121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.102288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.102313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.102479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.102504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.102684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.102711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 
00:34:12.569 [2024-07-25 00:14:18.102883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.102911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.103087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.103122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.103320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.103345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.103488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.103528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.103709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.103736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 
00:34:12.569 [2024-07-25 00:14:18.103905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.103933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.104074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.104110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.104321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.104345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.104525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.104553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.104757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.104785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 
00:34:12.569 [2024-07-25 00:14:18.104921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.104948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.105140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.105177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.105318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.105344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.105531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.105556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.105800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.105828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 
00:34:12.569 [2024-07-25 00:14:18.106045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.106070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.106221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.106251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.106407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.106432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.106589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.106614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.106763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.106788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 
00:34:12.569 [2024-07-25 00:14:18.106962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.106990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.107193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.107219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.107402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.107428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.107552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.569 [2024-07-25 00:14:18.107576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.569 qpair failed and we were unable to recover it. 00:34:12.569 [2024-07-25 00:14:18.107762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.107787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 
00:34:12.570 [2024-07-25 00:14:18.107923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.107948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.108113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.108139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.108307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.108333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.108501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.108526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.108714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.108743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 
00:34:12.570 [2024-07-25 00:14:18.108914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.108942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.109125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.109168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.109335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.109360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.109521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.109545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.109716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.109741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 
00:34:12.570 [2024-07-25 00:14:18.109917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.109945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.110169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.110195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.110354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.110379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.110536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.110561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.110705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.110731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 
00:34:12.570 [2024-07-25 00:14:18.110901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.110943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.111129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.111164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.111363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.111388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.111570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.111595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.111812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.111852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 
00:34:12.570 [2024-07-25 00:14:18.112027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.112054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.112269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.112301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.112463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.112488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.112650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.112675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.112831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.112856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 
00:34:12.570 [2024-07-25 00:14:18.113019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.113047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.113226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.113252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.113406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.113431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.113589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.113614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.113750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.113776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 
00:34:12.570 [2024-07-25 00:14:18.113935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.113960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.114153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.114179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.114348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.114373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.114554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.114579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.570 [2024-07-25 00:14:18.114761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.114786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 
00:34:12.570 [2024-07-25 00:14:18.114950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.570 [2024-07-25 00:14:18.114975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.570 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.115159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.115185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.115319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.115344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.115511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.115537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.115691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.115716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 
00:34:12.571 [2024-07-25 00:14:18.115879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.115904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.116098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.116129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.116298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.116323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.116512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.116537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.116723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.116748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 
00:34:12.571 [2024-07-25 00:14:18.116882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.116908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.117072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.117098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.117311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.117337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.117468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.117494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.117680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.117705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 
00:34:12.571 [2024-07-25 00:14:18.117833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.117858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.117995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.118020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.118194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.118220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.118348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.118374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.118559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.118588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 
00:34:12.571 [2024-07-25 00:14:18.118756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.118781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.118948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.118973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.119155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.119181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.119352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.119378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.119543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.119570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 
00:34:12.571 [2024-07-25 00:14:18.119742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.119767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.119953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.119978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.120141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.120167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.120301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.120326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.120503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.120528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 
00:34:12.571 [2024-07-25 00:14:18.120693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.120720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.120905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.120931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.121083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.121124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.121295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.121320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.121500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.121525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 
00:34:12.571 [2024-07-25 00:14:18.121653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.121678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.121854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.571 [2024-07-25 00:14:18.121879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.571 qpair failed and we were unable to recover it. 00:34:12.571 [2024-07-25 00:14:18.122005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.122030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.122214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.122241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.122427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.122452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 
00:34:12.572 [2024-07-25 00:14:18.122636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.122662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.122796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.122820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.123013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.123039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.123200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.123227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.123410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.123436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 
00:34:12.572 [2024-07-25 00:14:18.123620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.123645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.123801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.123826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.123992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.124017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.124153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.124179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.124336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.124361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 
00:34:12.572 [2024-07-25 00:14:18.124540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.124565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.124731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.124757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.124919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.124944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.125100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.125145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.125308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.125334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 
00:34:12.572 [2024-07-25 00:14:18.125492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.125517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.125676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.125701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.125838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.125863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.126050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.126075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.126257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.126283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 
00:34:12.572 [2024-07-25 00:14:18.126453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.126478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.126636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.126661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.126810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.126835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.126987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.127014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.127216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.127243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 
00:34:12.572 [2024-07-25 00:14:18.127386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.127411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.127573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.127598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.127756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.572 [2024-07-25 00:14:18.127781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.572 qpair failed and we were unable to recover it. 00:34:12.572 [2024-07-25 00:14:18.127966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.127991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.128153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.128178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 
00:34:12.573 [2024-07-25 00:14:18.128308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.128332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.128463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.128488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.128652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.128677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.128864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.128889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.129047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.129072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 
00:34:12.573 [2024-07-25 00:14:18.129220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.129245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.129404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.129431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.129565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.129600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.129763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.129788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.129949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.129974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 
00:34:12.573 [2024-07-25 00:14:18.130124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.130158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.130337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.130362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.130491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.130516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.130680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.130705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.130837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.130862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 
00:34:12.573 [2024-07-25 00:14:18.131004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.131029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.131187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.131213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.131372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.131409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.131548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.131573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.131729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.131754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 
00:34:12.573 [2024-07-25 00:14:18.131889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.131914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.132066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.132091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.132263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.132288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.132461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.132485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.132672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.132697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 
00:34:12.573 [2024-07-25 00:14:18.132850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.132875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.133005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.133029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.133176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.133202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.133374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.133399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.133536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.133561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 
00:34:12.573 [2024-07-25 00:14:18.133718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.133743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.133907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.133933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.134095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.134127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.134283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.134308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 00:34:12.573 [2024-07-25 00:14:18.134461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.573 [2024-07-25 00:14:18.134486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.573 qpair failed and we were unable to recover it. 
00:34:12.573 [2024-07-25 00:14:18.134621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.574 [2024-07-25 00:14:18.134647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.574 qpair failed and we were unable to recover it. 00:34:12.574 [2024-07-25 00:14:18.134805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.574 [2024-07-25 00:14:18.134830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.574 qpair failed and we were unable to recover it. 00:34:12.574 [2024-07-25 00:14:18.134968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.574 [2024-07-25 00:14:18.134994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.574 qpair failed and we were unable to recover it. 00:34:12.574 [2024-07-25 00:14:18.135153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.574 [2024-07-25 00:14:18.135179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.574 qpair failed and we were unable to recover it. 00:34:12.574 [2024-07-25 00:14:18.135319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.574 [2024-07-25 00:14:18.135345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.574 qpair failed and we were unable to recover it. 
00:34:12.574 [2024-07-25 00:14:18.135505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.574 [2024-07-25 00:14:18.135531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.574 qpair failed and we were unable to recover it. 00:34:12.574 [2024-07-25 00:14:18.135664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.574 [2024-07-25 00:14:18.135690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.574 qpair failed and we were unable to recover it. 00:34:12.574 [2024-07-25 00:14:18.135848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.574 [2024-07-25 00:14:18.135872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.574 qpair failed and we were unable to recover it. 00:34:12.574 [2024-07-25 00:14:18.136053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.574 [2024-07-25 00:14:18.136078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.574 qpair failed and we were unable to recover it. 00:34:12.574 [2024-07-25 00:14:18.136243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.574 [2024-07-25 00:14:18.136269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.574 qpair failed and we were unable to recover it. 
00:34:12.574 [2024-07-25 00:14:18.136426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.574 [2024-07-25 00:14:18.136451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.574 qpair failed and we were unable to recover it. 00:34:12.574 [2024-07-25 00:14:18.136590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.574 [2024-07-25 00:14:18.136616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.574 qpair failed and we were unable to recover it. 00:34:12.574 [2024-07-25 00:14:18.136805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.574 [2024-07-25 00:14:18.136830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.574 qpair failed and we were unable to recover it. 00:34:12.574 [2024-07-25 00:14:18.137012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.574 [2024-07-25 00:14:18.137036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.574 qpair failed and we were unable to recover it. 00:34:12.574 [2024-07-25 00:14:18.137224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.574 [2024-07-25 00:14:18.137258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.574 qpair failed and we were unable to recover it. 
00:34:12.574 [2024-07-25 00:14:18.137393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.137418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.137576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.137601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.137763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.137790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.137978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.138003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.138149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.138174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.138334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.138359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.138519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.138545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.138703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.138729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.138885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.138914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.139044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.139069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.139292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.139318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.139453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.139478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.139637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.139662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.139845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.139870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.140003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.140028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.140191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.140217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.140406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.140431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.140593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.140619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.140778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.140803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.140959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.140985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.141174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.141199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.141401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.574 [2024-07-25 00:14:18.141427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.574 qpair failed and we were unable to recover it.
00:34:12.574 [2024-07-25 00:14:18.141563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.141590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.141748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.141773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.141936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.141961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.142138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.142163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.142329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.142354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.142512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.142537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.142700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.142725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.142896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.142921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.143057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.143082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.143254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.143281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.143465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.143490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.143656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.143681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.143811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.143836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.143994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.144026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.144183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.144209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.144393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.144418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.144576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.144601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.144762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.144787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.144928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.144954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.145112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.145137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.145277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.145302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.145453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.145478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.145632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.145657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.145787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.145812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.145978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.146003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.146169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.146195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.146387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.146412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.146597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.146623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.146783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.146808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.146964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.146988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.147176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.147202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.147372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.147397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.147560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.147584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.147741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.147766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.147896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.147920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.148055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.148080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.148269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.575 [2024-07-25 00:14:18.148308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.575 qpair failed and we were unable to recover it.
00:34:12.575 [2024-07-25 00:14:18.148502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.148530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.148692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.148718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.148849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.148875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.149030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.149056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.149240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.149267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.149431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.149458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.149621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.149646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.149834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.149860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.150018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.150046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.150212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.150238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.150397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.150422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.150554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.150580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.150738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.150764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.150920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.150945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.151109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.151135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.151318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.151344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.151511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.151536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.151696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.151721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.151853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.151880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.152041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.152067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.152233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.152258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.152417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.152442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.152604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.152629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.152787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.152812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.152968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.152993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.153124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.153150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.153286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.153311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.153491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.153516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.153702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.153727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.153910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.153935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.154058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.154083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.154272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.154298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.154429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.154455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.154617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.154642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.154771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.154797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.154934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.576 [2024-07-25 00:14:18.154958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.576 qpair failed and we were unable to recover it.
00:34:12.576 [2024-07-25 00:14:18.155100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.576 [2024-07-25 00:14:18.155131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.576 qpair failed and we were unable to recover it. 00:34:12.576 [2024-07-25 00:14:18.155353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.155394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.155534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.155559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.155745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.155769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.155912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.155937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 
00:34:12.577 [2024-07-25 00:14:18.156122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.156148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.156327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.156352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.156524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.156549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.156738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.156767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.156903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.156928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 
00:34:12.577 [2024-07-25 00:14:18.157091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.157140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.157278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.157304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.157441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.157467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.157663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.157688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.157840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.157865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 
00:34:12.577 [2024-07-25 00:14:18.158001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.158027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.158189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.158222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.158399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.158439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.158632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.158660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.158809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.158837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 
00:34:12.577 [2024-07-25 00:14:18.159085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.159120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.159258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.159299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.159475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.159501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.159667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.159693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.159875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.159900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 
00:34:12.577 [2024-07-25 00:14:18.160110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.160145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.160331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.160357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.160544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.160570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.160725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.160750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.160886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.160911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 
00:34:12.577 [2024-07-25 00:14:18.161041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.161066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.161238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.577 [2024-07-25 00:14:18.161264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.577 qpair failed and we were unable to recover it. 00:34:12.577 [2024-07-25 00:14:18.161431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.161456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.161613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.161639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.161774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.161799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 
00:34:12.578 [2024-07-25 00:14:18.161989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.162016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.162176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.162201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.162340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.162365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.162547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.162572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.162700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.162726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 
00:34:12.578 [2024-07-25 00:14:18.162888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.162913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.163094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.163125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.163260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.163286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.163483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.163508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.163646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.163671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 
00:34:12.578 [2024-07-25 00:14:18.163854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.163879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.164041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.164066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.164241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.164267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.164452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.164477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.164646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.164672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 
00:34:12.578 [2024-07-25 00:14:18.164833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.164858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.164996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.165022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.165165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.165190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.165330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.165355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.165572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.165596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 
00:34:12.578 [2024-07-25 00:14:18.165729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.165755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.165925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.165950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.166088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.166133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.166302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.166328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.166505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.166531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 
00:34:12.578 [2024-07-25 00:14:18.166664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.166689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.166847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.166872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.167028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.167057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.167201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.167230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.167412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.167438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 
00:34:12.578 [2024-07-25 00:14:18.167599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.167624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.167821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.167846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.167988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.168012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.168190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.578 [2024-07-25 00:14:18.168229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.578 qpair failed and we were unable to recover it. 00:34:12.578 [2024-07-25 00:14:18.168372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.579 [2024-07-25 00:14:18.168398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.579 qpair failed and we were unable to recover it. 
00:34:12.579 [2024-07-25 00:14:18.168537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.579 [2024-07-25 00:14:18.168563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.579 qpair failed and we were unable to recover it. 00:34:12.579 [2024-07-25 00:14:18.168718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.579 [2024-07-25 00:14:18.168744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.579 qpair failed and we were unable to recover it. 00:34:12.579 [2024-07-25 00:14:18.168901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.579 [2024-07-25 00:14:18.168927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.579 qpair failed and we were unable to recover it. 00:34:12.579 [2024-07-25 00:14:18.169067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.579 [2024-07-25 00:14:18.169096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.579 qpair failed and we were unable to recover it. 00:34:12.579 [2024-07-25 00:14:18.169338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.579 [2024-07-25 00:14:18.169365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.579 qpair failed and we were unable to recover it. 
00:34:12.579 [2024-07-25 00:14:18.169525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.579 [2024-07-25 00:14:18.169551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.579 qpair failed and we were unable to recover it. 00:34:12.579 [2024-07-25 00:14:18.169695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.579 [2024-07-25 00:14:18.169721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.579 qpair failed and we were unable to recover it. 00:34:12.579 [2024-07-25 00:14:18.169874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.579 [2024-07-25 00:14:18.169901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.579 qpair failed and we were unable to recover it. 00:34:12.579 [2024-07-25 00:14:18.170083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.579 [2024-07-25 00:14:18.170116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.579 qpair failed and we were unable to recover it. 00:34:12.579 [2024-07-25 00:14:18.170308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.579 [2024-07-25 00:14:18.170334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.579 qpair failed and we were unable to recover it. 
00:34:12.579 [2024-07-25 00:14:18.170516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.579 [2024-07-25 00:14:18.170542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.579 qpair failed and we were unable to recover it. 00:34:12.579 [2024-07-25 00:14:18.170727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.579 [2024-07-25 00:14:18.170753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.579 qpair failed and we were unable to recover it. 00:34:12.579 [2024-07-25 00:14:18.170909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.579 [2024-07-25 00:14:18.170935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.579 qpair failed and we were unable to recover it. 00:34:12.579 [2024-07-25 00:14:18.171093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.579 [2024-07-25 00:14:18.171126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.579 qpair failed and we were unable to recover it. 00:34:12.579 [2024-07-25 00:14:18.171322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.579 [2024-07-25 00:14:18.171346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.579 qpair failed and we were unable to recover it. 
00:34:12.579 [2024-07-25 00:14:18.171506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.171531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.171676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.171701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.171885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.171910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.172074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.172100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.172279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.172308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.172472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.172498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.172659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.172685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.172810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.172835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.172992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.173017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.173208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.173234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.173394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.173418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.173590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.173615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.173857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.173882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.174015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.174040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.174168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.174193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.174446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.174470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.174609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.174634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.174769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.174794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.174928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.174953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.175113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.175138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.175304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.579 [2024-07-25 00:14:18.175330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.579 qpair failed and we were unable to recover it.
00:34:12.579 [2024-07-25 00:14:18.175488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.175512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.175666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.175692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.175848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.175874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.176134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.176160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.176309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.176335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.176490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.176515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.176675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.176700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.176853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.176878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.177035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.177060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.177234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.177260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.177416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.177447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.177606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.177631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.177787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.177812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.177972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.177997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.178138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.178164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.178311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.178336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.178480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.178505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.178669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.178694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.178880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.178905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.179057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.179081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.179268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.179293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.179457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.179482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.179639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.179663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.179848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.179873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.180040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.180065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.180237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.180263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.180447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.180472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.180603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.180628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.180788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.180813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.180945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.180971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.181165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.181191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.181331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.181356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.181493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.181519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.181681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.181705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.181935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.181960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.182121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.182147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.182310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.580 [2024-07-25 00:14:18.182335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.580 qpair failed and we were unable to recover it.
00:34:12.580 [2024-07-25 00:14:18.182469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.182494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.182626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.182653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.182793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.182818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.182969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.182994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.183121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.183146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.183274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.183299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.183455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.183480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.183663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.183688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.183829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.183854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.184037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.184062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.184213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.184238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.184396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.184421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.184577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.184603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.184770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.184795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.184929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.184958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.185152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.185192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.185360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.185387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.185618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.185644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.185769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.185795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.185948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.185989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.186121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.186146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.186359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.186385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.186541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.186567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.186723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.186748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.186909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.186936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.187092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.187126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.187269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.187295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.187455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.187481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.187640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.187666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.187880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.187905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.188090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.188128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.581 qpair failed and we were unable to recover it.
00:34:12.581 [2024-07-25 00:14:18.188291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.581 [2024-07-25 00:14:18.188317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.188477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.188503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.188629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.188655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.188823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.188848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.188976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.189002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.189163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.189191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.189322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.189347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.189502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.189527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.189681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.189706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.189883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.189908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.190070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.190096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.190263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.190289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.190470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.190494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.190652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.190676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.190854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.190879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.191058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.191083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.191254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.191278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.191460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.191484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.191641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.191666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.191822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.191848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.192004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.192029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.192163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.192189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.192357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.192381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.192543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.192568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.192731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.192757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.192909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.582 [2024-07-25 00:14:18.192933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-25 00:14:18.193083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.582 [2024-07-25 00:14:18.193113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.582 qpair failed and we were unable to recover it. 00:34:12.582 [2024-07-25 00:14:18.193279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.582 [2024-07-25 00:14:18.193304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.582 qpair failed and we were unable to recover it. 00:34:12.582 [2024-07-25 00:14:18.193460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.582 [2024-07-25 00:14:18.193484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.582 qpair failed and we were unable to recover it. 00:34:12.582 [2024-07-25 00:14:18.193650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.582 [2024-07-25 00:14:18.193674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.582 qpair failed and we were unable to recover it. 00:34:12.582 [2024-07-25 00:14:18.193809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.582 [2024-07-25 00:14:18.193834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.582 qpair failed and we were unable to recover it. 
00:34:12.582 [2024-07-25 00:14:18.194001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.582 [2024-07-25 00:14:18.194026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.582 qpair failed and we were unable to recover it. 00:34:12.582 [2024-07-25 00:14:18.194187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.582 [2024-07-25 00:14:18.194211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.582 qpair failed and we were unable to recover it. 00:34:12.582 [2024-07-25 00:14:18.194372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.582 [2024-07-25 00:14:18.194397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.582 qpair failed and we were unable to recover it. 00:34:12.582 [2024-07-25 00:14:18.194581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.582 [2024-07-25 00:14:18.194606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.582 qpair failed and we were unable to recover it. 00:34:12.582 [2024-07-25 00:14:18.194788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.582 [2024-07-25 00:14:18.194814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.582 qpair failed and we were unable to recover it. 
00:34:12.582 [2024-07-25 00:14:18.194940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.582 [2024-07-25 00:14:18.194964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.582 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.195128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.195160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.195323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.195348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.195505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.195529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.195687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.195713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 
00:34:12.583 [2024-07-25 00:14:18.195868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.195893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.196076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.196107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.196274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.196298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.196455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.196479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.196634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.196658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 
00:34:12.583 [2024-07-25 00:14:18.196920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.196945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.197083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.197124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.197306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.197331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.197495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.197519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.197683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.197707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 
00:34:12.583 [2024-07-25 00:14:18.197876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.197901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.198087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.198118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.198282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.198306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.198489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.198514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.198679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.198704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 
00:34:12.583 [2024-07-25 00:14:18.198868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.198892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.199051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.199075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.199257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.199283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.199407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.199431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.199586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.199610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 
00:34:12.583 [2024-07-25 00:14:18.199746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.199773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.199957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.199982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.200114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.200139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.200298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.200324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.200509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.200534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 
00:34:12.583 [2024-07-25 00:14:18.200690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.200715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.200878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.200902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.201062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.201087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.201240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.201265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.201428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.201452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 
00:34:12.583 [2024-07-25 00:14:18.201588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.201613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.201772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.201797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.583 [2024-07-25 00:14:18.201963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.583 [2024-07-25 00:14:18.201988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.583 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.202124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.202150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.202276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.202301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 
00:34:12.584 [2024-07-25 00:14:18.202464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.202488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.202626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.202652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.202794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.202819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.202979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.203004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.203163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.203187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 
00:34:12.584 [2024-07-25 00:14:18.203354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.203379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.203513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.203540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.203694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.203719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.203901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.203926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.204088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.204119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 
00:34:12.584 [2024-07-25 00:14:18.204272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.204296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.204451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.204475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.204659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.204684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.204844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.204869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.205024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.205048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 
00:34:12.584 [2024-07-25 00:14:18.205236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.205261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.205404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.205430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.205573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.205598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.205793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.205818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.205975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.205999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 
00:34:12.584 [2024-07-25 00:14:18.206124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.206150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.206290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.206316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.206501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.206525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.206685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.206710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 00:34:12.584 [2024-07-25 00:14:18.206868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.206894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 
00:34:12.584 [2024-07-25 00:14:18.207078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.584 [2024-07-25 00:14:18.207120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.584 qpair failed and we were unable to recover it. 
[... the same error triplet — connect() failed with errno = 111 (ECONNREFUSED), followed by the nvme_tcp_qpair_connect_sock failure for tqpair=0xfc4570 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats 114 more times, timestamps 00:14:18.207258 through 00:14:18.228093 ...]
00:34:12.587 [2024-07-25 00:14:18.228229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.587 [2024-07-25 00:14:18.228255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.587 qpair failed and we were unable to recover it. 00:34:12.587 [2024-07-25 00:14:18.228451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.587 [2024-07-25 00:14:18.228476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.587 qpair failed and we were unable to recover it. 00:34:12.587 [2024-07-25 00:14:18.228635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.587 [2024-07-25 00:14:18.228660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.587 qpair failed and we were unable to recover it. 00:34:12.587 [2024-07-25 00:14:18.228843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.587 [2024-07-25 00:14:18.228868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.587 qpair failed and we were unable to recover it. 00:34:12.587 [2024-07-25 00:14:18.229039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.587 [2024-07-25 00:14:18.229064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.587 qpair failed and we were unable to recover it. 
00:34:12.587 [2024-07-25 00:14:18.229228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.587 [2024-07-25 00:14:18.229254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.587 qpair failed and we were unable to recover it. 00:34:12.587 [2024-07-25 00:14:18.229394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.587 [2024-07-25 00:14:18.229419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.587 qpair failed and we were unable to recover it. 00:34:12.587 [2024-07-25 00:14:18.229557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.587 [2024-07-25 00:14:18.229582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.587 qpair failed and we were unable to recover it. 00:34:12.587 [2024-07-25 00:14:18.229701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.587 [2024-07-25 00:14:18.229725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.587 qpair failed and we were unable to recover it. 00:34:12.587 [2024-07-25 00:14:18.229848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.587 [2024-07-25 00:14:18.229872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.587 qpair failed and we were unable to recover it. 
00:34:12.588 [2024-07-25 00:14:18.230057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.230081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.230222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.230247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.230438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.230463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.230624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.230649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.230833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.230858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 
00:34:12.588 [2024-07-25 00:14:18.231017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.231042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.231207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.231233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.231394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.231419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.231575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.231600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.231736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.231760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 
00:34:12.588 [2024-07-25 00:14:18.231919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.231944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.232097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.232128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.232281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.232306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.232436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.232461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.232647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.232672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 
00:34:12.588 [2024-07-25 00:14:18.232857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.232888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.233060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.233097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.233270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.233302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.233466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.233492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.233639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.233664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 
00:34:12.588 [2024-07-25 00:14:18.233819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.233843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.234003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.234028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.234193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.234219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.234373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.234397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.234540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.234564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 
00:34:12.588 [2024-07-25 00:14:18.234701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.234726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.234865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.234891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.235063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.235111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.235301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.235328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.235506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.235533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 
00:34:12.588 [2024-07-25 00:14:18.235716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.235742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.235898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.235923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.236083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.236113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.236255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.236282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.588 [2024-07-25 00:14:18.236450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.236486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 
00:34:12.588 [2024-07-25 00:14:18.236640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.588 [2024-07-25 00:14:18.236666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.588 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.236826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.236853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.236982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.237008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.237167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.237193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.237354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.237379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 
00:34:12.880 [2024-07-25 00:14:18.237508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.237543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.237720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.237755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.237932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.237979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.238161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.238199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.238404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.238437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 
00:34:12.880 [2024-07-25 00:14:18.238584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.238618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.238776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.238809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.238961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.238994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.239180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.239217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.239395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.239423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 
00:34:12.880 [2024-07-25 00:14:18.239586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.239613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.239788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.239823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.240034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.240061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.240243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.240269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.240401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.240427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 
00:34:12.880 [2024-07-25 00:14:18.240583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.240607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.240748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.240774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.240956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.240981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.241147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.241172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.241310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.241335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 
00:34:12.880 [2024-07-25 00:14:18.241499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.241524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.241677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.241702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.241861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.241886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.242017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.242041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.242175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.242201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 
00:34:12.880 [2024-07-25 00:14:18.242353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.242378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.242559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.242583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.242765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.242790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.242926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.242952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-25 00:14:18.243121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.243156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 
00:34:12.880 [2024-07-25 00:14:18.243345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-25 00:14:18.243370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.881 [2024-07-25 00:14:18.243534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.881 [2024-07-25 00:14:18.243558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.881 qpair failed and we were unable to recover it. 00:34:12.881 [2024-07-25 00:14:18.243718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.881 [2024-07-25 00:14:18.243743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.881 qpair failed and we were unable to recover it. 00:34:12.881 [2024-07-25 00:14:18.243927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.881 [2024-07-25 00:14:18.243953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.881 qpair failed and we were unable to recover it. 00:34:12.881 [2024-07-25 00:14:18.244081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.881 [2024-07-25 00:14:18.244113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.881 qpair failed and we were unable to recover it. 
00:34:12.881 [2024-07-25 00:14:18.244245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.244270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.244406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.244431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.244624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.244649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.244801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.244826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.244996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.245021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.245156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.245183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.245350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.245376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.245511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.245536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.245696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.245722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.245902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.245927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.246111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.246137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.246319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.246344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.246503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.246528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.246692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.246732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.246896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.246921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.247053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.247078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.247243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.247268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.247393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.247418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.247580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.247604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.247758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.247783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.247938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.247963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.248106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.248132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.248281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.248307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.248482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.248507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.248680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.248706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.248844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.248869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.249052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.249077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.249241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.249267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.249427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.249453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.249612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.249637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.249776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.249803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.881 [2024-07-25 00:14:18.249997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.881 [2024-07-25 00:14:18.250022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.881 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.250174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.250200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.250355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.250380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.250538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.250567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.250727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.250752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.250903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.250944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.251113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.251139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.251273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.251298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.251476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.251502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.251645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.251670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.251829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.251856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.252012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.252038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.252194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.252220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.252376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.252401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.252537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.252562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.252712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.252738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.252921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.252946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.253134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.253160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.253299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.253334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.253527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.253552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.253680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.253704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.253889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.253914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.254078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.254111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.254298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.254323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.254463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.254489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.254634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.254659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.254795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.254820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.254978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.255003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.255180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.255206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.255361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.255386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.255551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.255576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.255771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.255796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.255961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.255986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.256114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.256140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.256302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.256327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.256522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.256547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.882 [2024-07-25 00:14:18.256688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.882 [2024-07-25 00:14:18.256712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.882 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.256852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.256878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.257013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.257038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.257171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.257196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.257371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.257396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.257531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.257557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.257724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.257749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.257903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.257928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.258061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.258086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.258250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.258275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.258431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.258456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.258585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.258610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.258743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.258770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.258897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.258922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.259048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.259073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.259260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.259286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.259444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.259468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.259601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.259626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.259792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.259819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.259952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.259977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.260137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.260164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.260301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.260326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.260485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.260511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.260671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.260696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.260859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.260884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.261040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.261066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.261227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.261252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.261395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.883 [2024-07-25 00:14:18.261419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.883 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-25 00:14:18.261546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.883 [2024-07-25 00:14:18.261571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.883 qpair failed and we were unable to recover it. 00:34:12.883 [2024-07-25 00:14:18.261754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.883 [2024-07-25 00:14:18.261779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.883 qpair failed and we were unable to recover it. 00:34:12.883 [2024-07-25 00:14:18.261938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.883 [2024-07-25 00:14:18.261964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.883 qpair failed and we were unable to recover it. 00:34:12.883 [2024-07-25 00:14:18.262148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.883 [2024-07-25 00:14:18.262174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.883 qpair failed and we were unable to recover it. 00:34:12.883 [2024-07-25 00:14:18.262334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.883 [2024-07-25 00:14:18.262359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.883 qpair failed and we were unable to recover it. 
00:34:12.883 [2024-07-25 00:14:18.262512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.883 [2024-07-25 00:14:18.262537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.883 qpair failed and we were unable to recover it. 00:34:12.883 [2024-07-25 00:14:18.262667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.883 [2024-07-25 00:14:18.262697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.883 qpair failed and we were unable to recover it. 00:34:12.883 [2024-07-25 00:14:18.262838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.883 [2024-07-25 00:14:18.262863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.883 qpair failed and we were unable to recover it. 00:34:12.883 [2024-07-25 00:14:18.263019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.883 [2024-07-25 00:14:18.263044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.883 qpair failed and we were unable to recover it. 00:34:12.883 [2024-07-25 00:14:18.263202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.263228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 
00:34:12.884 [2024-07-25 00:14:18.263387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.263412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.263548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.263573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.263730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.263756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.263885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.263910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.264068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.264093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 
00:34:12.884 [2024-07-25 00:14:18.264268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.264294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.264447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.264472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.264658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.264683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.264809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.264835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.264991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.265017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 
00:34:12.884 [2024-07-25 00:14:18.265182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.265209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.265367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.265391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.265546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.265571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.265714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.265740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.265895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.265920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 
00:34:12.884 [2024-07-25 00:14:18.266050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.266077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.266213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.266240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.266427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.266451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.266614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.266639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.266818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.266843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 
00:34:12.884 [2024-07-25 00:14:18.266975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.267015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.267160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.267186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.267344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.267369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.267507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.267547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.267691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.267717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 
00:34:12.884 [2024-07-25 00:14:18.267854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.267894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.268062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.268087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-25 00:14:18.268249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-25 00:14:18.268275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.268408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.268433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.268564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.268589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 
00:34:12.885 [2024-07-25 00:14:18.268748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.268773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.268931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.268956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.269145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.269171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.269330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.269356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.269517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.269542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 
00:34:12.885 [2024-07-25 00:14:18.269730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.269755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.269934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.269963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.270126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.270152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.270338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.270363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.270518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.270543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 
00:34:12.885 [2024-07-25 00:14:18.270678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.270703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.270863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.270889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.271046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.271071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.271264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.271290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.271424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.271450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 
00:34:12.885 [2024-07-25 00:14:18.271642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.271667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.271834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.271859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.271987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.272012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.272150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.272175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.272317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.272342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 
00:34:12.885 [2024-07-25 00:14:18.272502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.272527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.272687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.272711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.272841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.272866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.273015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.273039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.273203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.273230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 
00:34:12.885 [2024-07-25 00:14:18.273388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.273413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.273568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.273594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.273778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.273803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.273934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.273960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.274125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.274151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 
00:34:12.885 [2024-07-25 00:14:18.274292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.274317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.274478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.274503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.274658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.274682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.274870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-25 00:14:18.274896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-25 00:14:18.275059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.886 [2024-07-25 00:14:18.275084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.886 qpair failed and we were unable to recover it. 
00:34:12.886 [2024-07-25 00:14:18.275246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.886 [2024-07-25 00:14:18.275271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.886 qpair failed and we were unable to recover it. 00:34:12.886 [2024-07-25 00:14:18.275455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.886 [2024-07-25 00:14:18.275480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.886 qpair failed and we were unable to recover it. 00:34:12.886 [2024-07-25 00:14:18.275633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.886 [2024-07-25 00:14:18.275658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.886 qpair failed and we were unable to recover it. 00:34:12.886 [2024-07-25 00:14:18.275810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.886 [2024-07-25 00:14:18.275836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.886 qpair failed and we were unable to recover it. 00:34:12.886 [2024-07-25 00:14:18.275991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.886 [2024-07-25 00:14:18.276017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.886 qpair failed and we were unable to recover it. 
00:34:12.886 [2024-07-25 00:14:18.276151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.886 [2024-07-25 00:14:18.276177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.886 qpair failed and we were unable to recover it. 00:34:12.886 [2024-07-25 00:14:18.276333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.886 [2024-07-25 00:14:18.276358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.886 qpair failed and we were unable to recover it. 00:34:12.886 [2024-07-25 00:14:18.276490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.886 [2024-07-25 00:14:18.276516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.886 qpair failed and we were unable to recover it. 00:34:12.886 [2024-07-25 00:14:18.276703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.886 [2024-07-25 00:14:18.276728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.886 qpair failed and we were unable to recover it. 00:34:12.886 [2024-07-25 00:14:18.276859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.886 [2024-07-25 00:14:18.276885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.886 qpair failed and we were unable to recover it. 
00:34:12.886 [2024-07-25 00:14:18.277023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.886 [2024-07-25 00:14:18.277048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.886 qpair failed and we were unable to recover it. 00:34:12.886 [2024-07-25 00:14:18.277212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.886 [2024-07-25 00:14:18.277245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.886 qpair failed and we were unable to recover it. 00:34:12.886 [2024-07-25 00:14:18.277384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.886 [2024-07-25 00:14:18.277409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.886 qpair failed and we were unable to recover it. 00:34:12.886 [2024-07-25 00:14:18.277594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.886 [2024-07-25 00:14:18.277619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.886 qpair failed and we were unable to recover it. 00:34:12.886 [2024-07-25 00:14:18.277744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.886 [2024-07-25 00:14:18.277770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.886 qpair failed and we were unable to recover it. 
00:34:12.889 [2024-07-25 00:14:18.297719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-25 00:14:18.297744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-25 00:14:18.297873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-25 00:14:18.297898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-25 00:14:18.298051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-25 00:14:18.298076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-25 00:14:18.298231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-25 00:14:18.298258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-25 00:14:18.298396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-25 00:14:18.298421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 
00:34:12.889 [2024-07-25 00:14:18.298581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-25 00:14:18.298607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-25 00:14:18.298789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-25 00:14:18.298818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-25 00:14:18.298973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-25 00:14:18.298998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-25 00:14:18.299184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-25 00:14:18.299209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-25 00:14:18.299365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-25 00:14:18.299390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 
00:34:12.889 [2024-07-25 00:14:18.299525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-25 00:14:18.299549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-25 00:14:18.299682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-25 00:14:18.299707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-25 00:14:18.299865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-25 00:14:18.299891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-25 00:14:18.300022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-25 00:14:18.300047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-25 00:14:18.300200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.300226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 
00:34:12.890 [2024-07-25 00:14:18.300362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.300387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.300540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.300566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.300702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.300729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.300895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.300920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.301080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.301110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 
00:34:12.890 [2024-07-25 00:14:18.301304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.301330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.301463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.301488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.301615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.301640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.301777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.301802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.301925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.301950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 
00:34:12.890 [2024-07-25 00:14:18.302121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.302147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.302273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.302298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.302452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.302477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.302634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.302660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.302794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.302819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 
00:34:12.890 [2024-07-25 00:14:18.302970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.302995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.303130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.303156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.303337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.303362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.303493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.303518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.303675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.303700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 
00:34:12.890 [2024-07-25 00:14:18.303864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.303888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.304052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.304077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.304239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.304265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.304396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.304420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.304600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.304625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 
00:34:12.890 [2024-07-25 00:14:18.304803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.304828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.304985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.305010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.305149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.305174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.305311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.305338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.305470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.305495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 
00:34:12.890 [2024-07-25 00:14:18.305652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.305678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.305819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.305856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.306028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.306056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.306203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.306229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-25 00:14:18.306356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.306382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 
00:34:12.890 [2024-07-25 00:14:18.306571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-25 00:14:18.306598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.306769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.306804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.306942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.306968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.307160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.307195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.307367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.307394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 
00:34:12.891 [2024-07-25 00:14:18.307551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.307578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.307706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.307732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.307892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.307917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.308076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.308115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.308304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.308329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 
00:34:12.891 [2024-07-25 00:14:18.308474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.308500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.308659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.308684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.308844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.308869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.309005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.309031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.309218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.309244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 
00:34:12.891 [2024-07-25 00:14:18.309410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.309437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.309622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.309647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.309805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.309830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.309990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.310015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.310173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.310199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 
00:34:12.891 [2024-07-25 00:14:18.310331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.310356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.310513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.310538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.310697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.310721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.310886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.310911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-25 00:14:18.311073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.311098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 
00:34:12.891 [2024-07-25 00:14:18.311263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-25 00:14:18.311289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 
00:34:12.891-00:34:12.894 [same error repeated through 00:14:18.332227: posix_sock_create connect() failed with errno = 111, followed by the nvme_tcp_qpair_connect_sock connection error for tqpair=0x7fda80000b90, addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." after each retry] 
00:34:12.894 [2024-07-25 00:14:18.332386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.894 [2024-07-25 00:14:18.332411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.332543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.332568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.332734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.332760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.332924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.332949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.333083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.333113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 
00:34:12.895 [2024-07-25 00:14:18.333289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.333314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.333437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.333462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.333593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.333618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.333796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.333821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.333941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.333966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 
00:34:12.895 [2024-07-25 00:14:18.334129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.334154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.334334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.334359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.334518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.334543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.334728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.334753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.334886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.334911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 
00:34:12.895 [2024-07-25 00:14:18.335070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.335095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.335234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.335264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.335400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.335425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.335576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.335601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.335734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.335760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 
00:34:12.895 [2024-07-25 00:14:18.335944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.335969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.336153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.336179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.336315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.336339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.336493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.336518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.336676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.336703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 
00:34:12.895 [2024-07-25 00:14:18.336859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.336884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.337010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.337035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.337217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.337242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.337405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.337430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.337589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.337614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 
00:34:12.895 [2024-07-25 00:14:18.337801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.337826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.337975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.338000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.338133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.338158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.338315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.338341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.338504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.338530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 
00:34:12.895 [2024-07-25 00:14:18.338685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.338710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.338841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-25 00:14:18.338866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.895 qpair failed and we were unable to recover it. 00:34:12.895 [2024-07-25 00:14:18.339050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.339075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.339215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.339242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.339404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.339429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 
00:34:12.896 [2024-07-25 00:14:18.339611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.339636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.339768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.339793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.339980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.340005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.340163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.340190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.340347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.340373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 
00:34:12.896 [2024-07-25 00:14:18.340527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.340552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.340714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.340740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.340928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.340953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.341136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.341162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.341324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.341350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 
00:34:12.896 [2024-07-25 00:14:18.341481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.341506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.341690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.341715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.341841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.341866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.342018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.342043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.342203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.342230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 
00:34:12.896 [2024-07-25 00:14:18.342368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.342394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.342560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.342589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.342747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.342773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.342927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.342952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.343112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.343137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 
00:34:12.896 [2024-07-25 00:14:18.343305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.343331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.343482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.343507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.343660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.343685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.343847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.343872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.344035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.344060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 
00:34:12.896 [2024-07-25 00:14:18.344239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.344265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.344427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.344453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.344579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.344604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.344739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.344764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.344922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.344947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 
00:34:12.896 [2024-07-25 00:14:18.345114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.345140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.345280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.345305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.345489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.345514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.896 qpair failed and we were unable to recover it. 00:34:12.896 [2024-07-25 00:14:18.345637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.896 [2024-07-25 00:14:18.345662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.897 qpair failed and we were unable to recover it. 00:34:12.897 [2024-07-25 00:14:18.345822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.897 [2024-07-25 00:14:18.345847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.897 qpair failed and we were unable to recover it. 
00:34:12.897 [2024-07-25 00:14:18.346004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.897 [2024-07-25 00:14:18.346028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.897 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1037 connect() failed, errno = 111 / nvme_tcp.c:2374 sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 00:14:18.346153 through 00:14:18.366781 ...]
00:34:12.900 [2024-07-25 00:14:18.366943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.366969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.367113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.367139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.367316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.367341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.367468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.367493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.367651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.367676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 
00:34:12.900 [2024-07-25 00:14:18.367852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.367877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.368023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.368048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.368219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.368245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.368403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.368428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.368580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.368605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 
00:34:12.900 [2024-07-25 00:14:18.368731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.368756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.368935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.368960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.369142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.369167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.369298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.369327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.369489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.369514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 
00:34:12.900 [2024-07-25 00:14:18.369696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.369721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.369901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.369926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.370051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.370078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.370249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.370275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.370431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.370457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 
00:34:12.900 [2024-07-25 00:14:18.370617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.370642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.370802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.370827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.370987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.371013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.900 [2024-07-25 00:14:18.371170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.900 [2024-07-25 00:14:18.371196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.900 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.371379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.371404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 
00:34:12.901 [2024-07-25 00:14:18.371537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.371563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.371748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.371773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.371911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.371936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.372096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.372129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.372261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.372286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 
00:34:12.901 [2024-07-25 00:14:18.372441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.372466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.372598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.372623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.372806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.372832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.372955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.372980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.373136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.373162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 
00:34:12.901 [2024-07-25 00:14:18.373346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.373371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.373528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.373553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.373715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.373741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.373874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.373900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.374063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.374088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 
00:34:12.901 [2024-07-25 00:14:18.374280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.374306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.374464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.374489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.374638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.374663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.374798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.374824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.374957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.374982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 
00:34:12.901 [2024-07-25 00:14:18.375163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.375189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.375375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.375400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.375580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.375605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.375762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.375786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.375920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.375945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 
00:34:12.901 [2024-07-25 00:14:18.376112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.376138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.376295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.376320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.376505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.376530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.376660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.376690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.376824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.376850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 
00:34:12.901 [2024-07-25 00:14:18.377035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.377060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.377245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.377271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.377424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.377449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.377603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.377627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 00:34:12.901 [2024-07-25 00:14:18.377804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.901 [2024-07-25 00:14:18.377829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.901 qpair failed and we were unable to recover it. 
00:34:12.901 [2024-07-25 00:14:18.377978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.902 [2024-07-25 00:14:18.378003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.902 qpair failed and we were unable to recover it. 00:34:12.902 [2024-07-25 00:14:18.378139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.902 [2024-07-25 00:14:18.378165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.902 qpair failed and we were unable to recover it. 00:34:12.902 [2024-07-25 00:14:18.378295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.902 [2024-07-25 00:14:18.378320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.902 qpair failed and we were unable to recover it. 00:34:12.902 [2024-07-25 00:14:18.378455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.902 [2024-07-25 00:14:18.378480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.902 qpair failed and we were unable to recover it. 00:34:12.902 [2024-07-25 00:14:18.378646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.902 [2024-07-25 00:14:18.378672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.902 qpair failed and we were unable to recover it. 
00:34:12.902 [2024-07-25 00:14:18.378823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.902 [2024-07-25 00:14:18.378848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.902 qpair failed and we were unable to recover it. 00:34:12.902 [2024-07-25 00:14:18.378995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.902 [2024-07-25 00:14:18.379020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.902 qpair failed and we were unable to recover it. 00:34:12.902 [2024-07-25 00:14:18.379182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.902 [2024-07-25 00:14:18.379208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.902 qpair failed and we were unable to recover it. 00:34:12.902 [2024-07-25 00:14:18.379340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.902 [2024-07-25 00:14:18.379365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.902 qpair failed and we were unable to recover it. 00:34:12.902 [2024-07-25 00:14:18.379528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.902 [2024-07-25 00:14:18.379553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.902 qpair failed and we were unable to recover it. 
00:34:12.902 [2024-07-25 00:14:18.379685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.902 [2024-07-25 00:14:18.379710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.902 qpair failed and we were unable to recover it. 00:34:12.902 [2024-07-25 00:14:18.379867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.902 [2024-07-25 00:14:18.379892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.902 qpair failed and we were unable to recover it. 00:34:12.902 [2024-07-25 00:14:18.380045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.902 [2024-07-25 00:14:18.380070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.902 qpair failed and we were unable to recover it. 00:34:12.902 [2024-07-25 00:14:18.380265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.902 [2024-07-25 00:14:18.380291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.902 qpair failed and we were unable to recover it. 00:34:12.902 [2024-07-25 00:14:18.380446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.902 [2024-07-25 00:14:18.380471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.902 qpair failed and we were unable to recover it. 
00:34:12.902 [2024-07-25 00:14:18.380624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.902 [2024-07-25 00:14:18.380649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.902 qpair failed and we were unable to recover it.
[the three error lines above repeat verbatim with successive timestamps from 00:14:18.380 through 00:14:18.397; repeats trimmed for readability]
00:34:12.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 943841 Killed "${NVMF_APP[@]}" "$@"
00:34:12.905 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:12.905 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:12.905 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:12.905 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:12.905 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[interleaved connect() failed / qpair failed error lines (errno = 111, tqpair=0x7fda80000b90, addr=10.0.0.2, port=4420) trimmed for readability]
00:34:12.905 [2024-07-25 00:14:18.399108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.399134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 [2024-07-25 00:14:18.399269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.399294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 [2024-07-25 00:14:18.399456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.399481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 [2024-07-25 00:14:18.399636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.399661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 [2024-07-25 00:14:18.399816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.399841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 [2024-07-25 00:14:18.399972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.399999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 [2024-07-25 00:14:18.400142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.400172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 [2024-07-25 00:14:18.400334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.400359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 [2024-07-25 00:14:18.400489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.400515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 [2024-07-25 00:14:18.400654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.400681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 [2024-07-25 00:14:18.400840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.400865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 [2024-07-25 00:14:18.401029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.401054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 [2024-07-25 00:14:18.401192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.401219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 [2024-07-25 00:14:18.401351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.401377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 [2024-07-25 00:14:18.401540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.401566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 [2024-07-25 00:14:18.401728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.401753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 [2024-07-25 00:14:18.401883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.401908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 [2024-07-25 00:14:18.402040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 [2024-07-25 00:14:18.402066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.905 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=944393
00:34:12.905 [2024-07-25 00:14:18.402230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.905 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:12.905 [2024-07-25 00:14:18.402256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.905 qpair failed and we were unable to recover it.
00:34:12.906 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 944393
00:34:12.906 [2024-07-25 00:14:18.402386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.402412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 944393 ']'
00:34:12.906 [2024-07-25 00:14:18.402572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.402599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:12.906 [2024-07-25 00:14:18.402739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.402763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:12.906 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:12.906 [2024-07-25 00:14:18.402944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.402970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:12.906 [2024-07-25 00:14:18.403090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.403124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:12.906 [2024-07-25 00:14:18.403259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.403284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.403443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.403469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.403649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.403675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.403808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.403834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.403990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.404018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.404174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.404200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.404362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.404387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.404557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.404581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.404753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.404777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.404943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.404969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.405131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.405155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.405313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.405338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.405466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.405491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.405615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.405640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.405823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.405849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.405975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.406001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.406164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.406189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.406327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.406354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.406513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.406540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.406704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.406730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.406859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.406884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.407038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.407063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.407230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.407255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.407419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.407444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.407600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.407625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.407786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.407811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.407972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.407997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.408151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.906 [2024-07-25 00:14:18.408177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.906 qpair failed and we were unable to recover it.
00:34:12.906 [2024-07-25 00:14:18.408358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.408383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.408542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.408568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.408759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.408784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.408973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.408998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.409158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.409184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.409322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.409348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.409533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.409559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.409717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.409742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.409881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.409907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.410057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.410096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.410274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.410299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.410427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.410452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.410581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.410606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.410792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.410818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.410999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.411024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.411181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.411207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.411367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.411398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.411532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.411558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.411685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.411711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.411894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.411919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.412077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.412111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.412244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.412270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.412409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.412433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.412616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.412641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.412803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.412829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.412982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.413007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.413201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.413227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.413359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.413384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.413571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.413596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.413752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.413777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.413967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.413993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.414176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.414202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.414338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.414364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.414495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.414521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.414649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.414674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.414832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.414857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.907 qpair failed and we were unable to recover it.
00:34:12.907 [2024-07-25 00:14:18.415020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.907 [2024-07-25 00:14:18.415045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.415206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.415232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.415394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.415418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.415576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.415601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.415763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.415788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.415923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.415948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.416113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.416139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.416308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.416334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.416464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.416489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.416672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.416697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.416852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.416877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.417034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.417059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.417221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.417247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.417386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.417413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.417573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.417598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.417725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.417751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.417910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.417935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.418062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.418088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.418256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.418282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.418419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.418444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.418600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.418630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.418814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.418839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.418996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.419021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.419181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.419208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.419363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.419388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.419572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.419597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.419755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.419781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.419940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.419965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.420123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.420149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.420277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.420303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.420461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.420486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.908 [2024-07-25 00:14:18.420667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.908 [2024-07-25 00:14:18.420693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.908 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.420830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.420855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.421041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.421066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.421254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.421279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.421439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.421464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.421601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.421626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.421758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.421783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.421910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.421934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.422093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.422126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.422284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.422309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.422472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.422498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.422660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.422687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.422853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.422878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.423064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.423089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.423237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.423263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.423444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.423470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.423649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.423688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.423881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.423913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.424074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.424120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.424284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.424310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.424490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.424516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.424680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.424706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.424845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.424873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.425011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.425036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.425196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.425222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.425362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.425387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.425543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.425568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.425748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.425774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.425908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.425933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.426122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.426148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.426283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.426308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.426464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.426489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.426623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.426649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.426818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.426844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.427007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.427033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.427172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.427199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.909 [2024-07-25 00:14:18.427326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.909 [2024-07-25 00:14:18.427351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.909 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.427507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.427533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.427686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.427711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.427868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.427893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.428052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.428077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.428245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.428270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.428452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.428477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.428609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.428635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.428768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.428794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.428957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.428983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.429138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.429165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.429298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.429324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.429486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.429511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.429669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.429694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.429823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.429848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.430002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.430027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.430227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.430266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.430406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.430433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.430618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.430643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.430805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.430830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.430959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.430991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.431155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.431181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.431343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.431370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.431528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.431553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.431711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.431736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.431896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.431923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.432078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.432109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.432273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.432298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.432459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.432484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.432616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.432641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.432772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.432798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.432982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.910 [2024-07-25 00:14:18.433007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.910 qpair failed and we were unable to recover it.
00:34:12.910 [2024-07-25 00:14:18.433187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.910 [2024-07-25 00:14:18.433213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.910 qpair failed and we were unable to recover it. 00:34:12.910 [2024-07-25 00:14:18.433344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.910 [2024-07-25 00:14:18.433369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.910 qpair failed and we were unable to recover it. 00:34:12.910 [2024-07-25 00:14:18.433556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.910 [2024-07-25 00:14:18.433581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.910 qpair failed and we were unable to recover it. 00:34:12.910 [2024-07-25 00:14:18.433751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.910 [2024-07-25 00:14:18.433776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.910 qpair failed and we were unable to recover it. 00:34:12.910 [2024-07-25 00:14:18.433914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.910 [2024-07-25 00:14:18.433939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.910 qpair failed and we were unable to recover it. 
00:34:12.910 [2024-07-25 00:14:18.434073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.434098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.434247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.434272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.434432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.434457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.434618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.434643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.434808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.434834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 
00:34:12.911 [2024-07-25 00:14:18.434964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.434989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.435133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.435169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.435324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.435349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.435527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.435552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.435692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.435717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 
00:34:12.911 [2024-07-25 00:14:18.435913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.435939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.436071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.436096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.436258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.436284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.436408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.436434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.436594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.436619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 
00:34:12.911 [2024-07-25 00:14:18.436801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.436825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.436984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.437010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.437194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.437220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.437348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.437373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.437497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.437523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 
00:34:12.911 [2024-07-25 00:14:18.437653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.437679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.437816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.437841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.437998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.438024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.438169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.438199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.438377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.438402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 
00:34:12.911 [2024-07-25 00:14:18.438529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.438554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.438714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.438740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.438879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.438905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.439086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.439124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.439283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.439309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 
00:34:12.911 [2024-07-25 00:14:18.439468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.439493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.439614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.439639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.439768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.439792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.439945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.439969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.440108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.440135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 
00:34:12.911 [2024-07-25 00:14:18.440314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.440339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.911 [2024-07-25 00:14:18.440522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.911 [2024-07-25 00:14:18.440546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.911 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.440707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.440733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.440866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.440891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.441044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.441069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 
00:34:12.912 [2024-07-25 00:14:18.441239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.441269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.441399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.441425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.441556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.441581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.441714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.441740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.441924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.441950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 
00:34:12.912 [2024-07-25 00:14:18.442109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.442135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.442261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.442286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.442416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.442441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.442596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.442622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.442750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.442777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 
00:34:12.912 [2024-07-25 00:14:18.442921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.442956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.443160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.443198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.443363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.443390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.443573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.443598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.443727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.443753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 
00:34:12.912 [2024-07-25 00:14:18.443933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.443958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.444120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.444146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.444328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.444354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.444515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.444541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.444676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.444701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 
00:34:12.912 [2024-07-25 00:14:18.444861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.444886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.445045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.445071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.445264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.445290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.445420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.445445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.445632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.445657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 
00:34:12.912 [2024-07-25 00:14:18.445788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.445813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.445971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.445996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.446180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.446206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.446367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.446393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.446526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.446551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 
00:34:12.912 [2024-07-25 00:14:18.446710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.446735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.446917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.446942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.447076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.912 [2024-07-25 00:14:18.447108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.912 qpair failed and we were unable to recover it. 00:34:12.912 [2024-07-25 00:14:18.447269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.913 [2024-07-25 00:14:18.447295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.913 qpair failed and we were unable to recover it. 00:34:12.913 [2024-07-25 00:14:18.447456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.913 [2024-07-25 00:14:18.447482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.913 qpair failed and we were unable to recover it. 
00:34:12.913 [2024-07-25 00:14:18.447643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.913 [2024-07-25 00:14:18.447668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.913 qpair failed and we were unable to recover it. 00:34:12.913 [2024-07-25 00:14:18.447858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.913 [2024-07-25 00:14:18.447883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.913 qpair failed and we were unable to recover it. 00:34:12.913 [2024-07-25 00:14:18.448067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.913 [2024-07-25 00:14:18.448096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.913 qpair failed and we were unable to recover it. 00:34:12.913 [2024-07-25 00:14:18.448257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.913 [2024-07-25 00:14:18.448283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.913 qpair failed and we were unable to recover it. 00:34:12.913 [2024-07-25 00:14:18.448438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.913 [2024-07-25 00:14:18.448474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.913 qpair failed and we were unable to recover it. 
00:34:12.913 [2024-07-25 00:14:18.448639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.913 [2024-07-25 00:14:18.448664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.913 qpair failed and we were unable to recover it. 00:34:12.913 [2024-07-25 00:14:18.448812] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:34:12.913 [2024-07-25 00:14:18.448842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.913 [2024-07-25 00:14:18.448869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.913 qpair failed and we were unable to recover it. 00:34:12.913 [2024-07-25 00:14:18.448888] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.913 [2024-07-25 00:14:18.449024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.913 [2024-07-25 00:14:18.449049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.913 qpair failed and we were unable to recover it. 00:34:12.913 [2024-07-25 00:14:18.449237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.913 [2024-07-25 00:14:18.449274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.913 qpair failed and we were unable to recover it. 
00:34:12.913 [2024-07-25 00:14:18.449439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.913 [2024-07-25 00:14:18.449466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.913 qpair failed and we were unable to recover it. 00:34:12.913 [2024-07-25 00:14:18.449624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.913 [2024-07-25 00:14:18.449649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.913 qpair failed and we were unable to recover it. 00:34:12.913 [2024-07-25 00:14:18.449805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.913 [2024-07-25 00:14:18.449830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.913 qpair failed and we were unable to recover it. 00:34:12.913 [2024-07-25 00:14:18.449984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.913 [2024-07-25 00:14:18.450009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.913 qpair failed and we were unable to recover it. 00:34:12.913 [2024-07-25 00:14:18.450207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.913 [2024-07-25 00:14:18.450233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.913 qpair failed and we were unable to recover it. 
00:34:12.913 [2024-07-25 00:14:18.450370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.913 [2024-07-25 00:14:18.450396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.913 qpair failed and we were unable to recover it.
00:34:12.913 [2024-07-25 00:14:18.450565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.913 [2024-07-25 00:14:18.450590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.913 qpair failed and we were unable to recover it.
00:34:12.913 [2024-07-25 00:14:18.450777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.913 [2024-07-25 00:14:18.450802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.913 qpair failed and we were unable to recover it.
00:34:12.913 [2024-07-25 00:14:18.450964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.913 [2024-07-25 00:14:18.450992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.913 qpair failed and we were unable to recover it.
00:34:12.913 [2024-07-25 00:14:18.451150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.913 [2024-07-25 00:14:18.451176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.913 qpair failed and we were unable to recover it.
00:34:12.913 [2024-07-25 00:14:18.451342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.913 [2024-07-25 00:14:18.451368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.913 qpair failed and we were unable to recover it.
00:34:12.913 [2024-07-25 00:14:18.451551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.913 [2024-07-25 00:14:18.451577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.913 qpair failed and we were unable to recover it.
00:34:12.913 [2024-07-25 00:14:18.451736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.913 [2024-07-25 00:14:18.451762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.913 qpair failed and we were unable to recover it.
00:34:12.913 [2024-07-25 00:14:18.451917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.913 [2024-07-25 00:14:18.451942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.913 qpair failed and we were unable to recover it.
00:34:12.913 [2024-07-25 00:14:18.452124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.913 [2024-07-25 00:14:18.452164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.913 qpair failed and we were unable to recover it.
00:34:12.913 [2024-07-25 00:14:18.452307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.913 [2024-07-25 00:14:18.452333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.913 qpair failed and we were unable to recover it.
00:34:12.913 [2024-07-25 00:14:18.452492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.913 [2024-07-25 00:14:18.452518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.913 qpair failed and we were unable to recover it.
00:34:12.913 [2024-07-25 00:14:18.452655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.913 [2024-07-25 00:14:18.452680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.913 qpair failed and we were unable to recover it.
00:34:12.913 [2024-07-25 00:14:18.452844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.913 [2024-07-25 00:14:18.452870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.913 qpair failed and we were unable to recover it.
00:34:12.913 [2024-07-25 00:14:18.453033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.913 [2024-07-25 00:14:18.453058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.453251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.453277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.453435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.453461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.453646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.453671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.453829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.453855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.453988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.454013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.454170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.454196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.454334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.454360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.454525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.454550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.454707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.454732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.454916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.454941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.455111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.455137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.455273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.455298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.455459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.455484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.455625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.455650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.455831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.455856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.456045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.456070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.456224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.456250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.456380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.456406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.456577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.456602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.456790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.456815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.456947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.456972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.457119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.457146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.457333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.457359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.457514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.457540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.457709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.457734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.457921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.457946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.458111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.458142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.458297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.458323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.458472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.458498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.458660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.458685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.458841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.458867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.459031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.459056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.459251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.459277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.459462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.459487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.459649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.459675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.459832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.459857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.914 qpair failed and we were unable to recover it.
00:34:12.914 [2024-07-25 00:14:18.460021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.914 [2024-07-25 00:14:18.460047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.460208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.460235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.460376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.460401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.460534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.460560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.460723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.460749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.460935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.460960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.461130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.461168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.461338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.461365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.461502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.461528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.461718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.461744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.461882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.461907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.462031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.462055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.462212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.462251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.462414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.462441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.462600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.462625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.462780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.462806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.462941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.462966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.463095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.463133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.463316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.463340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.463499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.463524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.463687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.463714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.463876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.463904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.464086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.464118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.464252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.464279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.464413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.464439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.464598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.464623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.464781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.464806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.464964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.464991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.465181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.465206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.465365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.465390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.465549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.465574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.465737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.465763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.465919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.465944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.466115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.466142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.466300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.466326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.466501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.466526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.466680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.915 [2024-07-25 00:14:18.466705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.915 qpair failed and we were unable to recover it.
00:34:12.915 [2024-07-25 00:14:18.466861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.916 [2024-07-25 00:14:18.466886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.916 qpair failed and we were unable to recover it.
00:34:12.916 [2024-07-25 00:14:18.467046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.916 [2024-07-25 00:14:18.467071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.916 qpair failed and we were unable to recover it.
00:34:12.916 [2024-07-25 00:14:18.467202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.916 [2024-07-25 00:14:18.467227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.916 qpair failed and we were unable to recover it.
00:34:12.916 [2024-07-25 00:14:18.467384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.916 [2024-07-25 00:14:18.467409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.916 qpair failed and we were unable to recover it.
00:34:12.916 [2024-07-25 00:14:18.467579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.916 [2024-07-25 00:14:18.467603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.916 qpair failed and we were unable to recover it.
00:34:12.916 [2024-07-25 00:14:18.467763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.916 [2024-07-25 00:14:18.467788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.916 qpair failed and we were unable to recover it.
00:34:12.916 [2024-07-25 00:14:18.467930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.916 [2024-07-25 00:14:18.467954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:12.916 qpair failed and we were unable to recover it.
00:34:12.916 [2024-07-25 00:14:18.468113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.468139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.468297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.468323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.468501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.468526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.468705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.468730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.468910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.468935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 
00:34:12.916 [2024-07-25 00:14:18.469090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.469120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.469259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.469284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.469441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.469466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.469623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.469648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.469809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.469834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 
00:34:12.916 [2024-07-25 00:14:18.469996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.470021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.470166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.470191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.470324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.470351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.470534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.470559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.470699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.470725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 
00:34:12.916 [2024-07-25 00:14:18.470882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.470907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.471067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.471092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.471256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.471281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.471441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.471466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.471623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.471648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 
00:34:12.916 [2024-07-25 00:14:18.471805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.471830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.472013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.472038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.472191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.472217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.472380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.472405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.472535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.472559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 
00:34:12.916 [2024-07-25 00:14:18.472682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.472707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.472861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.472886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.473024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.473049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.473239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.473266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 00:34:12.916 [2024-07-25 00:14:18.473424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.916 [2024-07-25 00:14:18.473449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.916 qpair failed and we were unable to recover it. 
00:34:12.916 [2024-07-25 00:14:18.473600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.473625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.473786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.473811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.473965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.473990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.474179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.474204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.474366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.474392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 
00:34:12.917 [2024-07-25 00:14:18.474544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.474569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.474755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.474781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.474966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.474991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.475130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.475156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.475282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.475307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 
00:34:12.917 [2024-07-25 00:14:18.475443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.475468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.475598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.475629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.475768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.475793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.475954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.475979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.476146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.476171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 
00:34:12.917 [2024-07-25 00:14:18.476351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.476376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.476540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.476565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.476705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.476730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.476907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.476932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.477062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.477087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 
00:34:12.917 [2024-07-25 00:14:18.477251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.477277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.477419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.477443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.477575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.477599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.477728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.477753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 00:34:12.917 [2024-07-25 00:14:18.477880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.477905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 
00:34:12.917 [2024-07-25 00:14:18.478059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-07-25 00:14:18.478098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 
00:34:12.918 EAL: No free 2048 kB hugepages reported on node 1 
[identical connect()/qpair-failure message pairs for tqpair=0x7fda80000b90 repeated through 00:14:18.487306; duplicates omitted]
00:34:12.919 [2024-07-25 00:14:18.487458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.487483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.487660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.487685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.487821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.487846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.487976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.488004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.488199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.488226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 
00:34:12.919 [2024-07-25 00:14:18.488363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.488396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.488552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.488577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.488791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.488816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.488980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.489004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.489135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.489161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 
00:34:12.919 [2024-07-25 00:14:18.489314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.489339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.489508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.489532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.489693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.489719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.489870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.489894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.490050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.490077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 
00:34:12.919 [2024-07-25 00:14:18.490242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.490268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.490430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.490455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.490614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.490639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.490798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.490822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.490980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.491005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 
00:34:12.919 [2024-07-25 00:14:18.491139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.491166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.491303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.491328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.491487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.491512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.491668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.491694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.491829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.491854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 
00:34:12.919 [2024-07-25 00:14:18.492035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.492060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.492212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.492238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.492371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.492397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.492585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-07-25 00:14:18.492610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-07-25 00:14:18.492770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.492795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 
00:34:12.920 [2024-07-25 00:14:18.492953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.492978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.493165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.493191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.493344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.493369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.493525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.493550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.493730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.493755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 
00:34:12.920 [2024-07-25 00:14:18.493909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.493934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.494066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.494092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.494252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.494277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.494408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.494434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.494596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.494621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 
00:34:12.920 [2024-07-25 00:14:18.494803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.494828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.494974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.494999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.495164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.495190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.495352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.495384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.495536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.495561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 
00:34:12.920 [2024-07-25 00:14:18.495698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.495723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.495849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.495874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.496057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.496082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.496257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.496282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.496468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.496492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 
00:34:12.920 [2024-07-25 00:14:18.496656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.496681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.496817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.496843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.496972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.496998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.497155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.497181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.497332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.497358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 
00:34:12.920 [2024-07-25 00:14:18.497548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.497574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.497694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.497719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.497888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.497913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.498072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.498097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.498229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.498255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 
00:34:12.920 [2024-07-25 00:14:18.498413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.498438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.498563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.498589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.498718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.498743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.498899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.498924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.920 [2024-07-25 00:14:18.499112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.499138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 
00:34:12.920 [2024-07-25 00:14:18.499272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.920 [2024-07-25 00:14:18.499297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.920 qpair failed and we were unable to recover it. 00:34:12.921 [2024-07-25 00:14:18.499460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.921 [2024-07-25 00:14:18.499486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.921 qpair failed and we were unable to recover it. 00:34:12.921 [2024-07-25 00:14:18.499661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.921 [2024-07-25 00:14:18.499686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.921 qpair failed and we were unable to recover it. 00:34:12.921 [2024-07-25 00:14:18.499845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.921 [2024-07-25 00:14:18.499869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.921 qpair failed and we were unable to recover it. 00:34:12.921 [2024-07-25 00:14:18.499993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.921 [2024-07-25 00:14:18.500018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.921 qpair failed and we were unable to recover it. 
00:34:12.921 [2024-07-25 00:14:18.500202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.921 [2024-07-25 00:14:18.500228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.921 qpair failed and we were unable to recover it. 00:34:12.921 [2024-07-25 00:14:18.500390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.921 [2024-07-25 00:14:18.500415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.921 qpair failed and we were unable to recover it. 00:34:12.921 [2024-07-25 00:14:18.500547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.921 [2024-07-25 00:14:18.500571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.921 qpair failed and we were unable to recover it. 00:34:12.921 [2024-07-25 00:14:18.500731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.921 [2024-07-25 00:14:18.500755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.921 qpair failed and we were unable to recover it. 00:34:12.921 [2024-07-25 00:14:18.500913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.921 [2024-07-25 00:14:18.500939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.921 qpair failed and we were unable to recover it. 
00:34:12.921 [2024-07-25 00:14:18.501128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.921 [2024-07-25 00:14:18.501154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.921 qpair failed and we were unable to recover it. 00:34:12.921 [2024-07-25 00:14:18.501305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.921 [2024-07-25 00:14:18.501331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.921 qpair failed and we were unable to recover it. 00:34:12.921 [2024-07-25 00:14:18.501468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.921 [2024-07-25 00:14:18.501494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.921 qpair failed and we were unable to recover it. 00:34:12.921 [2024-07-25 00:14:18.501656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.921 [2024-07-25 00:14:18.501682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.921 qpair failed and we were unable to recover it. 00:34:12.921 [2024-07-25 00:14:18.501816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.921 [2024-07-25 00:14:18.501842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.921 qpair failed and we were unable to recover it. 
00:34:12.924 [2024-07-25 00:14:18.521992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.522019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.522182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.522208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.522366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.522365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:12.924 [2024-07-25 00:14:18.522391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.522552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.522578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.522759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.522784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 
00:34:12.924 [2024-07-25 00:14:18.523005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.523030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.523184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.523209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.523369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.523394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.523547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.523573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.523706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.523730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 
00:34:12.924 [2024-07-25 00:14:18.523886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.523911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.524045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.524072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.524234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.524260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.524426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.524451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.524607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.524632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 
00:34:12.924 [2024-07-25 00:14:18.524774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.524799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.524956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.524982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.525165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.525190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.525340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.525365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.525520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.525546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 
00:34:12.924 [2024-07-25 00:14:18.525673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.525698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.525856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.525881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.526017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.526042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.526209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.526235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.526366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.526392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 
00:34:12.924 [2024-07-25 00:14:18.526552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.526577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-07-25 00:14:18.526705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-07-25 00:14:18.526735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.526903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.526929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.527079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.527110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.527247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.527274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 
00:34:12.925 [2024-07-25 00:14:18.527432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.527458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.527588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.527613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.527779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.527804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.527962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.527987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.528147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.528173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 
00:34:12.925 [2024-07-25 00:14:18.528330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.528357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.528540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.528565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.528694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.528719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.528887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.528913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.529051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.529076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 
00:34:12.925 [2024-07-25 00:14:18.529242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.529268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.529456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.529480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.529608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.529632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.529792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.529817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.530008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.530033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 
00:34:12.925 [2024-07-25 00:14:18.530175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.530202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.530346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.530374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.530545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.530569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.530746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.530771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.530920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.530945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 
00:34:12.925 [2024-07-25 00:14:18.531074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.531099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.531291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.531316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.531453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.531478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.531657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.531703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.531896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.531932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 
00:34:12.925 [2024-07-25 00:14:18.532111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.532146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.532360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.532392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.532635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.532667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.532844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.532885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda78000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.533084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.533117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 
00:34:12.925 [2024-07-25 00:14:18.533277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.533303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.533457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.533482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.533644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.533669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.533819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.533844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.533984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.534011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 
00:34:12.925 [2024-07-25 00:14:18.534204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.534230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.534359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.534389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.534560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.534585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.534728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.925 [2024-07-25 00:14:18.534756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.925 qpair failed and we were unable to recover it. 00:34:12.925 [2024-07-25 00:14:18.534914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.926 [2024-07-25 00:14:18.534940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.926 qpair failed and we were unable to recover it. 
00:34:12.926 [2024-07-25 00:14:18.535163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.926 [2024-07-25 00:14:18.535190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.926 qpair failed and we were unable to recover it. 00:34:12.926 [2024-07-25 00:14:18.535327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.926 [2024-07-25 00:14:18.535352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.926 qpair failed and we were unable to recover it. 00:34:12.926 [2024-07-25 00:14:18.535524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.926 [2024-07-25 00:14:18.535549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.926 qpair failed and we were unable to recover it. 00:34:12.926 [2024-07-25 00:14:18.535713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.926 [2024-07-25 00:14:18.535739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.926 qpair failed and we were unable to recover it. 00:34:12.926 [2024-07-25 00:14:18.535871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.926 [2024-07-25 00:14:18.535896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.926 qpair failed and we were unable to recover it. 
00:34:12.926 [2024-07-25 00:14:18.536030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.926 [2024-07-25 00:14:18.536057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420
00:34:12.926 qpair failed and we were unable to recover it.
[the same three-line error sequence repeats ~114 more times between 00:14:18.536214 and 00:14:18.556686 (elapsed 00:34:12.926-00:34:12.929): every connect() to 10.0.0.2:4420 fails with errno = 111 and the qpair cannot be recovered]
00:34:12.929 [2024-07-25 00:14:18.556844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.929 [2024-07-25 00:14:18.556869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.929 qpair failed and we were unable to recover it. 00:34:12.929 [2024-07-25 00:14:18.557024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.929 [2024-07-25 00:14:18.557051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.929 qpair failed and we were unable to recover it. 00:34:12.929 [2024-07-25 00:14:18.557213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.929 [2024-07-25 00:14:18.557239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.929 qpair failed and we were unable to recover it. 00:34:12.929 [2024-07-25 00:14:18.557397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.929 [2024-07-25 00:14:18.557423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.929 qpair failed and we were unable to recover it. 00:34:12.929 [2024-07-25 00:14:18.557572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.929 [2024-07-25 00:14:18.557597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.929 qpair failed and we were unable to recover it. 
00:34:12.929 [2024-07-25 00:14:18.557776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.929 [2024-07-25 00:14:18.557802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.929 qpair failed and we were unable to recover it. 00:34:12.929 [2024-07-25 00:14:18.557962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.929 [2024-07-25 00:14:18.557987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.929 qpair failed and we were unable to recover it. 00:34:12.929 [2024-07-25 00:14:18.558149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.929 [2024-07-25 00:14:18.558175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.929 qpair failed and we were unable to recover it. 00:34:12.929 [2024-07-25 00:14:18.558362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.929 [2024-07-25 00:14:18.558405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.929 qpair failed and we were unable to recover it. 00:34:12.929 [2024-07-25 00:14:18.558592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.929 [2024-07-25 00:14:18.558629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.929 qpair failed and we were unable to recover it. 
00:34:12.929 [2024-07-25 00:14:18.558783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.929 [2024-07-25 00:14:18.558809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.929 qpair failed and we were unable to recover it. 00:34:12.929 [2024-07-25 00:14:18.558967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.929 [2024-07-25 00:14:18.558992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.929 qpair failed and we were unable to recover it. 00:34:12.929 [2024-07-25 00:14:18.559128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.929 [2024-07-25 00:14:18.559154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.929 qpair failed and we were unable to recover it. 00:34:12.929 [2024-07-25 00:14:18.559337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.929 [2024-07-25 00:14:18.559365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.930 qpair failed and we were unable to recover it. 00:34:12.930 [2024-07-25 00:14:18.559534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.930 [2024-07-25 00:14:18.559571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.930 qpair failed and we were unable to recover it. 
00:34:12.930 [2024-07-25 00:14:18.559742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.930 [2024-07-25 00:14:18.559768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.930 qpair failed and we were unable to recover it. 00:34:12.930 [2024-07-25 00:14:18.559900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.930 [2024-07-25 00:14:18.559927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.930 qpair failed and we were unable to recover it. 00:34:12.930 [2024-07-25 00:14:18.560079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.930 [2024-07-25 00:14:18.560113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.930 qpair failed and we were unable to recover it. 00:34:12.930 [2024-07-25 00:14:18.560278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.930 [2024-07-25 00:14:18.560304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.930 qpair failed and we were unable to recover it. 00:34:12.930 [2024-07-25 00:14:18.560494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.930 [2024-07-25 00:14:18.560532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.930 qpair failed and we were unable to recover it. 
00:34:12.930 [2024-07-25 00:14:18.560708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.930 [2024-07-25 00:14:18.560736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.930 qpair failed and we were unable to recover it. 00:34:12.930 [2024-07-25 00:14:18.560866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.930 [2024-07-25 00:14:18.560891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.930 qpair failed and we were unable to recover it. 00:34:12.930 [2024-07-25 00:14:18.561078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.930 [2024-07-25 00:14:18.561113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:12.930 qpair failed and we were unable to recover it. 00:34:12.930 [2024-07-25 00:14:18.561305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.930 [2024-07-25 00:14:18.561345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.930 qpair failed and we were unable to recover it. 00:34:12.930 [2024-07-25 00:14:18.561519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.930 [2024-07-25 00:14:18.561546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.930 qpair failed and we were unable to recover it. 
00:34:12.930 [2024-07-25 00:14:18.561733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.930 [2024-07-25 00:14:18.561759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.930 qpair failed and we were unable to recover it. 00:34:12.930 [2024-07-25 00:14:18.561941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.930 [2024-07-25 00:14:18.561966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.930 qpair failed and we were unable to recover it. 00:34:12.930 [2024-07-25 00:14:18.562100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.930 [2024-07-25 00:14:18.562133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.930 qpair failed and we were unable to recover it. 00:34:12.930 [2024-07-25 00:14:18.562294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.930 [2024-07-25 00:14:18.562320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.930 qpair failed and we were unable to recover it. 00:34:12.930 [2024-07-25 00:14:18.562505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.930 [2024-07-25 00:14:18.562531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:12.930 qpair failed and we were unable to recover it. 
00:34:13.211 [2024-07-25 00:14:18.562748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.211 [2024-07-25 00:14:18.562799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.211 qpair failed and we were unable to recover it. 00:34:13.211 [2024-07-25 00:14:18.562973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.211 [2024-07-25 00:14:18.563014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.211 qpair failed and we were unable to recover it. 00:34:13.211 [2024-07-25 00:14:18.563199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.211 [2024-07-25 00:14:18.563229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.211 qpair failed and we were unable to recover it. 00:34:13.211 [2024-07-25 00:14:18.563398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.211 [2024-07-25 00:14:18.563425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.211 qpair failed and we were unable to recover it. 00:34:13.211 [2024-07-25 00:14:18.563570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.211 [2024-07-25 00:14:18.563601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.211 qpair failed and we were unable to recover it. 
00:34:13.211 [2024-07-25 00:14:18.563745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.211 [2024-07-25 00:14:18.563780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.211 qpair failed and we were unable to recover it. 00:34:13.211 [2024-07-25 00:14:18.563931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.211 [2024-07-25 00:14:18.563963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.211 qpair failed and we were unable to recover it. 00:34:13.211 [2024-07-25 00:14:18.564126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.564154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.564292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.564318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.564465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.564491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 
00:34:13.212 [2024-07-25 00:14:18.564630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.564656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.564787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.564813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.564969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.565007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.565178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.565213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.565364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.565397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 
00:34:13.212 [2024-07-25 00:14:18.565550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.565587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.565761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.565794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.565945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.565978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.566165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.566203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.566364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.566398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda80000b90 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 
00:34:13.212 [2024-07-25 00:14:18.566568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.566606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.566758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.566797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.566969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.566997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.567163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.567198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.567406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.567444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 
00:34:13.212 [2024-07-25 00:14:18.567637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.567670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.567854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.567886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.568034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.568065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.568271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.568300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.568481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.568507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 
00:34:13.212 [2024-07-25 00:14:18.568665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.568691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.568855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.568881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.569040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.569066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.569215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.569241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.569431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.569457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 
00:34:13.212 [2024-07-25 00:14:18.569640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.569665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.569794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.569820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.569948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.569973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.570133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.570160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.570412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.570438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 
00:34:13.212 [2024-07-25 00:14:18.570619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.570644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.570808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.570833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.570993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.212 [2024-07-25 00:14:18.571019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.212 qpair failed and we were unable to recover it. 00:34:13.212 [2024-07-25 00:14:18.571168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.213 [2024-07-25 00:14:18.571195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.213 qpair failed and we were unable to recover it. 00:34:13.213 [2024-07-25 00:14:18.571338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.213 [2024-07-25 00:14:18.571363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.213 qpair failed and we were unable to recover it. 
00:34:13.213 [2024-07-25 00:14:18.571496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.213 [2024-07-25 00:14:18.571521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.213 qpair failed and we were unable to recover it. 
00:34:13.216 [2024-07-25 00:14:18.592574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.592599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 00:34:13.216 [2024-07-25 00:14:18.592765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.592790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 00:34:13.216 [2024-07-25 00:14:18.592947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.592971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 00:34:13.216 [2024-07-25 00:14:18.593133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.593158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 00:34:13.216 [2024-07-25 00:14:18.593283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.593308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 
00:34:13.216 [2024-07-25 00:14:18.593481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.593506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 00:34:13.216 [2024-07-25 00:14:18.593635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.593660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 00:34:13.216 [2024-07-25 00:14:18.593786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.593811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 00:34:13.216 [2024-07-25 00:14:18.593973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.593998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 00:34:13.216 [2024-07-25 00:14:18.594132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.594158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 
00:34:13.216 [2024-07-25 00:14:18.594327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.594352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 00:34:13.216 [2024-07-25 00:14:18.594520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.594544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 00:34:13.216 [2024-07-25 00:14:18.594738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.594763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 00:34:13.216 [2024-07-25 00:14:18.594918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.594943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 00:34:13.216 [2024-07-25 00:14:18.595133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.595159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 
00:34:13.216 [2024-07-25 00:14:18.595291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.595317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 00:34:13.216 [2024-07-25 00:14:18.595504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.595529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 00:34:13.216 [2024-07-25 00:14:18.595673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.595699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 00:34:13.216 [2024-07-25 00:14:18.595832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.216 [2024-07-25 00:14:18.595857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.216 qpair failed and we were unable to recover it. 00:34:13.216 [2024-07-25 00:14:18.596037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.596062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 
00:34:13.217 [2024-07-25 00:14:18.596207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.596232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.596395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.596419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.596552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.596577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.596757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.596786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.596927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.596953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 
00:34:13.217 [2024-07-25 00:14:18.597135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.597161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.597298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.597323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.597483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.597507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.597689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.597714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.597883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.597907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 
00:34:13.217 [2024-07-25 00:14:18.598065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.598090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.598233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.598258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.598416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.598441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.598609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.598634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.598820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.598845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 
00:34:13.217 [2024-07-25 00:14:18.599029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.599054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.599230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.599256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.599444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.599469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.599605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.599632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.599791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.599816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 
00:34:13.217 [2024-07-25 00:14:18.599999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.600024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.600179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.600205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.600358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.600384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.600542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.600567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.600699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.600724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 
00:34:13.217 [2024-07-25 00:14:18.600880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.600905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.601037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.601062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.601228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.601255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.601393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.601418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.601579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.601604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 
00:34:13.217 [2024-07-25 00:14:18.601762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.601791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.601972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.601997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.602132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.602164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.602319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.602344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.217 [2024-07-25 00:14:18.602502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.602527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 
00:34:13.217 [2024-07-25 00:14:18.602665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.217 [2024-07-25 00:14:18.602690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.217 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.602849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.602874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.603037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.603062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.603249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.603274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.604142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.604174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 
00:34:13.218 [2024-07-25 00:14:18.604338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.604365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.604548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.604574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.604709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.604736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.604889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.604915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.605112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.605139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 
00:34:13.218 [2024-07-25 00:14:18.605300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.605326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.605500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.605525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.605683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.605708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.605870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.605895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.606056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.606083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 
00:34:13.218 [2024-07-25 00:14:18.606262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.606288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.606418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.606443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.606574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.606599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.606733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.606759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.606884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.606910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 
00:34:13.218 [2024-07-25 00:14:18.607067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.607093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.607260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.607285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.607469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.607494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.607664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.607691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.607877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.607902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 
00:34:13.218 [2024-07-25 00:14:18.608080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.608123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.608284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.608308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.608446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.608472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.608653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.608678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.608810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.608836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 
00:34:13.218 [2024-07-25 00:14:18.609020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.609046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.609203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.609229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.609383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.609419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.609601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.609626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.609808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.609833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 
00:34:13.218 [2024-07-25 00:14:18.609971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.609996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.610161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.610190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.218 [2024-07-25 00:14:18.610350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.218 [2024-07-25 00:14:18.610375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.218 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.610501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.610527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.610656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.610681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 
00:34:13.219 [2024-07-25 00:14:18.610840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.610867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.611046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.611071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.611251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.611277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.611438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.611463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.611624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.611650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 
00:34:13.219 [2024-07-25 00:14:18.611787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.611813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.611996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.612022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.612172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.612199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.612339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.612364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.612499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.612524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 
00:34:13.219 [2024-07-25 00:14:18.612684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.612709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.612871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.612896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.613027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.613052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.613199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.613224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.613374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.613399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 
00:34:13.219 [2024-07-25 00:14:18.613557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.613582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.613746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.613771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.613954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.613979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.614167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.614193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.614322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.614347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 
00:34:13.219 [2024-07-25 00:14:18.614517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.614542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.614720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.614745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.614904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.614929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.615114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.615140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.615312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.615337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 
00:34:13.219 [2024-07-25 00:14:18.615517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.615542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.615726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.615752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.615882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.615907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.616061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.616086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.616234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.616260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 
00:34:13.219 [2024-07-25 00:14:18.616412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.616437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.616597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.616622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.616745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.616770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.616948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.616973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 00:34:13.219 [2024-07-25 00:14:18.617163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.219 [2024-07-25 00:14:18.617189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.219 qpair failed and we were unable to recover it. 
00:34:13.220 [2024-07-25 00:14:18.617321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.617348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.617521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.617547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.617715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.617741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.617928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.617953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.618108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.618134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 
00:34:13.220 [2024-07-25 00:14:18.618290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.618316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.618469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.618494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.618656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.618681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.618865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.618890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.619053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.619078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 
00:34:13.220 [2024-07-25 00:14:18.619210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.619236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.619373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.619398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.619540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.619566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.619699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.619725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.619858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.619883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 
00:34:13.220 [2024-07-25 00:14:18.620007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.620033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.620178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.620205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.620334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.620359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.620491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.620517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.620642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.620667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 
00:34:13.220 [2024-07-25 00:14:18.620796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.620821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.620945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.620971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.621112] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:13.220 [2024-07-25 00:14:18.621127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.621149] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:13.220 [2024-07-25 00:14:18.621153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.621175] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:13.220 [2024-07-25 00:14:18.621196] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:13.220 [2024-07-25 00:14:18.621212] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:13.220 [2024-07-25 00:14:18.621325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.621284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:13.220 [2024-07-25 00:14:18.621349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.621338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:13.220 [2024-07-25 00:14:18.621369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:13.220 [2024-07-25 00:14:18.621375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:13.220 [2024-07-25 00:14:18.621518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.621543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.621676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.621700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.621864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.621889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 
00:34:13.220 [2024-07-25 00:14:18.622020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.622045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.622180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.622206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.622331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.622356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.622489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.622515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.622653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.622678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 
00:34:13.220 [2024-07-25 00:14:18.622811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.622836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.220 qpair failed and we were unable to recover it. 00:34:13.220 [2024-07-25 00:14:18.622987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.220 [2024-07-25 00:14:18.623012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.221 qpair failed and we were unable to recover it. 00:34:13.221 [2024-07-25 00:14:18.623170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.221 [2024-07-25 00:14:18.623196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.221 qpair failed and we were unable to recover it. 00:34:13.221 [2024-07-25 00:14:18.623346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.221 [2024-07-25 00:14:18.623371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.221 qpair failed and we were unable to recover it. 00:34:13.221 [2024-07-25 00:14:18.623503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.221 [2024-07-25 00:14:18.623528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.221 qpair failed and we were unable to recover it. 
00:34:13.221 [2024-07-25 00:14:18.623660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.221 [2024-07-25 00:14:18.623685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.221 qpair failed and we were unable to recover it. 00:34:13.221 [2024-07-25 00:14:18.623842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.221 [2024-07-25 00:14:18.623868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.221 qpair failed and we were unable to recover it. 00:34:13.221 [2024-07-25 00:14:18.624031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.221 [2024-07-25 00:14:18.624056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.221 qpair failed and we were unable to recover it. 00:34:13.221 [2024-07-25 00:14:18.624218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.221 [2024-07-25 00:14:18.624245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.221 qpair failed and we were unable to recover it. 00:34:13.221 [2024-07-25 00:14:18.624374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.221 [2024-07-25 00:14:18.624400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.221 qpair failed and we were unable to recover it. 
00:34:13.224 [2024-07-25 00:14:18.643882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.643907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.644033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.644058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.644193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.644219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.644378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.644403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.644543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.644568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 
00:34:13.224 [2024-07-25 00:14:18.644724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.644749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.644878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.644903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.645030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.645056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.645191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.645218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.645386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.645411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 
00:34:13.224 [2024-07-25 00:14:18.645543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.645572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.645735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.645761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.645902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.645928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.646085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.646116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.646266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.646291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 
00:34:13.224 [2024-07-25 00:14:18.646432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.646459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.646621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.646651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.646778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.646803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.646987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.647012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.647138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.647167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 
00:34:13.224 [2024-07-25 00:14:18.647297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.647322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.647451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.647476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.224 [2024-07-25 00:14:18.647625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.224 [2024-07-25 00:14:18.647650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.224 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.647795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.647820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.647949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.647975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 
00:34:13.225 [2024-07-25 00:14:18.648133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.648166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.648298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.648323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.648488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.648513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.648641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.648666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.648798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.648831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 
00:34:13.225 [2024-07-25 00:14:18.648990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.649026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.649235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.649264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.649392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.649417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.649569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.649594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.649727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.649752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 
00:34:13.225 [2024-07-25 00:14:18.649888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.649914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.650036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.650061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.650240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.650275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.650414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.650439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.650598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.650623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 
00:34:13.225 [2024-07-25 00:14:18.650760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.650785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.650917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.650942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.651082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.651114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.651246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.651271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.651439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.651464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 
00:34:13.225 [2024-07-25 00:14:18.651632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.651658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.651783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.651808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.651938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.651963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.652185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.652211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.652339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.652365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 
00:34:13.225 [2024-07-25 00:14:18.652581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.652606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.652760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.652785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.652924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.652949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.653098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.653130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.653266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.653291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 
00:34:13.225 [2024-07-25 00:14:18.653437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.653462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.653616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.653641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.653796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.653820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.653976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.654002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 00:34:13.225 [2024-07-25 00:14:18.654140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.225 [2024-07-25 00:14:18.654166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.225 qpair failed and we were unable to recover it. 
00:34:13.226 [2024-07-25 00:14:18.654298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.654323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 00:34:13.226 [2024-07-25 00:14:18.654458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.654483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 00:34:13.226 [2024-07-25 00:14:18.654636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.654662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 00:34:13.226 [2024-07-25 00:14:18.654827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.654852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 00:34:13.226 [2024-07-25 00:14:18.654994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.655019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 
00:34:13.226 [2024-07-25 00:14:18.655283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.655308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 00:34:13.226 [2024-07-25 00:14:18.655433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.655458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 00:34:13.226 [2024-07-25 00:14:18.655606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.655630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 00:34:13.226 [2024-07-25 00:14:18.655786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.655810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 00:34:13.226 [2024-07-25 00:14:18.655975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.656006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 
00:34:13.226 [2024-07-25 00:14:18.656152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.656179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 00:34:13.226 [2024-07-25 00:14:18.656322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.656348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 00:34:13.226 [2024-07-25 00:14:18.656500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.656525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 00:34:13.226 [2024-07-25 00:14:18.656750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.656774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 00:34:13.226 [2024-07-25 00:14:18.656931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.656956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 
00:34:13.226 [2024-07-25 00:14:18.657099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.657129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 00:34:13.226 [2024-07-25 00:14:18.657254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.657279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 00:34:13.226 [2024-07-25 00:14:18.657421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.657447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 00:34:13.226 [2024-07-25 00:14:18.657612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.657641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 00:34:13.226 [2024-07-25 00:14:18.657783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.226 [2024-07-25 00:14:18.657807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.226 qpair failed and we were unable to recover it. 
00:34:13.229 [2024-07-25 00:14:18.677309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.229 [2024-07-25 00:14:18.677334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.229 qpair failed and we were unable to recover it. 00:34:13.229 [2024-07-25 00:14:18.677482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.229 [2024-07-25 00:14:18.677506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.229 qpair failed and we were unable to recover it. 00:34:13.229 [2024-07-25 00:14:18.677655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.229 [2024-07-25 00:14:18.677680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.229 qpair failed and we were unable to recover it. 00:34:13.229 [2024-07-25 00:14:18.677836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.229 [2024-07-25 00:14:18.677860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.229 qpair failed and we were unable to recover it. 00:34:13.229 [2024-07-25 00:14:18.678024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.229 [2024-07-25 00:14:18.678048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.229 qpair failed and we were unable to recover it. 
00:34:13.229 [2024-07-25 00:14:18.678222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.229 [2024-07-25 00:14:18.678248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.229 qpair failed and we were unable to recover it. 00:34:13.229 [2024-07-25 00:14:18.678411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.229 [2024-07-25 00:14:18.678436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.229 qpair failed and we were unable to recover it. 00:34:13.229 [2024-07-25 00:14:18.678585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.229 [2024-07-25 00:14:18.678609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.229 qpair failed and we were unable to recover it. 00:34:13.229 [2024-07-25 00:14:18.678758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.229 [2024-07-25 00:14:18.678783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.229 qpair failed and we were unable to recover it. 00:34:13.229 [2024-07-25 00:14:18.678921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.229 [2024-07-25 00:14:18.678947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.229 qpair failed and we were unable to recover it. 
00:34:13.229 [2024-07-25 00:14:18.679135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.229 [2024-07-25 00:14:18.679164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.229 qpair failed and we were unable to recover it. 00:34:13.229 [2024-07-25 00:14:18.679302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.229 [2024-07-25 00:14:18.679328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.679468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.679493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.679614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.679639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.679771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.679796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 
00:34:13.230 [2024-07-25 00:14:18.679953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.679979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.680139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.680195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.680330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.680355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.680496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.680521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.680680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.680705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 
00:34:13.230 [2024-07-25 00:14:18.680859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.680884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.681016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.681042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.681199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.681224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.681381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.681406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.681567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.681598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 
00:34:13.230 [2024-07-25 00:14:18.681737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.681762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.681902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.681927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.682078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.682108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.682289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.682314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.682503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.682529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 
00:34:13.230 [2024-07-25 00:14:18.682674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.682699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.682848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.682873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.683001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.683026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.683200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.683225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.683359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.683384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 
00:34:13.230 [2024-07-25 00:14:18.683529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.683556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.683710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.683735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.683891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.683917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.684058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.684084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.684265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.684291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 
00:34:13.230 [2024-07-25 00:14:18.684426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.684451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.684571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.684597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.684732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.684758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.684942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.684968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.685108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.685133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 
00:34:13.230 [2024-07-25 00:14:18.685275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.685300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.685429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.685454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.685634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.685659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.230 qpair failed and we were unable to recover it. 00:34:13.230 [2024-07-25 00:14:18.685844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.230 [2024-07-25 00:14:18.685868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.686047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.686072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 
00:34:13.231 [2024-07-25 00:14:18.686232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.686258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.686402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.686427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.686591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.686617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.686763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.686788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.686926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.686951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 
00:34:13.231 [2024-07-25 00:14:18.687091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.687121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.687264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.687289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.687458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.687483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.687639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.687664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.687884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.687908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 
00:34:13.231 [2024-07-25 00:14:18.688038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.688063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.688233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.688259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.688386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.688410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.688542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.688568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.688693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.688718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 
00:34:13.231 [2024-07-25 00:14:18.688849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.688874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.689023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.689048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.689184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.689210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.689398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.689423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.689558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.689584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 
00:34:13.231 [2024-07-25 00:14:18.689716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.689740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.689901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.689927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.690083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.690113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.690284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.690309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.690447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.690472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 
00:34:13.231 [2024-07-25 00:14:18.690605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.690630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.690788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.690813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.690945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.690971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.691110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.691135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.231 qpair failed and we were unable to recover it. 00:34:13.231 [2024-07-25 00:14:18.691268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.231 [2024-07-25 00:14:18.691293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 
00:34:13.232 [2024-07-25 00:14:18.691448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.691473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.691624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.691649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.691784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.691809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.691936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.691962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.692124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.692150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 
00:34:13.232 [2024-07-25 00:14:18.692286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.692311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.692445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.692469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.692629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.692654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.692800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.692825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.692951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.692976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 
00:34:13.232 [2024-07-25 00:14:18.693187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.693212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.693374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.693400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.693524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.693553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.693709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.693733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.693858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.693883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 
00:34:13.232 [2024-07-25 00:14:18.694014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.694039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.694172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.694198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.694353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.694378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.694501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.694527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.694659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.694685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 
00:34:13.232 [2024-07-25 00:14:18.694839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.694864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.695018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.695043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.695176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.695201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.695340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.695365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.695527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.695552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 
00:34:13.232 [2024-07-25 00:14:18.695677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.695702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.695849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.695874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.696001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.696025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.696183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.696208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.696338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.696362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 
00:34:13.232 [2024-07-25 00:14:18.696502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.696528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.696657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.696682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.696822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.696846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.696993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.697020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.697172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.697198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 
00:34:13.232 [2024-07-25 00:14:18.697333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.697360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.232 qpair failed and we were unable to recover it. 00:34:13.232 [2024-07-25 00:14:18.697517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.232 [2024-07-25 00:14:18.697542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.697689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.697714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.697881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.697907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.698045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.698070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 
00:34:13.233 [2024-07-25 00:14:18.698247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.698273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.698400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.698425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.698613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.698638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.698769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.698794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.698951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.698976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 
00:34:13.233 [2024-07-25 00:14:18.699160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.699185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.699321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.699345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.699507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.699532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.699661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.699686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.699839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.699865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 
00:34:13.233 [2024-07-25 00:14:18.699992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.700016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.700146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.700171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.700334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.700360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.700506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.700532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.700667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.700693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 
00:34:13.233 [2024-07-25 00:14:18.700829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.700854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.700986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.701012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.701194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.701220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.701354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.701380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.701529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.701554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 
00:34:13.233 [2024-07-25 00:14:18.701682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.701707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.701960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.701985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.702118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.702144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.702279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.702304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.702426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.702451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 
00:34:13.233 [2024-07-25 00:14:18.702570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.702595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.702771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.702796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.702959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.702984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.703119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.703145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.703280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.703305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 
00:34:13.233 [2024-07-25 00:14:18.703468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.703493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.703629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.703654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.703795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.233 [2024-07-25 00:14:18.703820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.233 qpair failed and we were unable to recover it. 00:34:13.233 [2024-07-25 00:14:18.703968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.703993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.704189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.704216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 
00:34:13.234 [2024-07-25 00:14:18.704352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.704377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.704503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.704528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.704680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.704705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.704838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.704863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.705015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.705040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 
00:34:13.234 [2024-07-25 00:14:18.705189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.705219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.705356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.705381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.705513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.705538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.705678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.705703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.705834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.705859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 
00:34:13.234 [2024-07-25 00:14:18.705994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.706019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.706181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.706206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.706332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.706357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.706488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.706513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.706667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.706692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 
00:34:13.234 [2024-07-25 00:14:18.706856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.706881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.707027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.707052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.707206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.707231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.707375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.707400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.707557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.707582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 
00:34:13.234 [2024-07-25 00:14:18.707750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.707775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.707931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.707956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.708083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.708112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.708281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.708306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 00:34:13.234 [2024-07-25 00:14:18.708432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.234 [2024-07-25 00:14:18.708457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.234 qpair failed and we were unable to recover it. 
00:34:13.234 [2024-07-25 00:14:18.708608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.234 [2024-07-25 00:14:18.708633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.234 qpair failed and we were unable to recover it.
00:34:13.234 [2024-07-25 00:14:18.708762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.234 [2024-07-25 00:14:18.708787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.234 qpair failed and we were unable to recover it.
00:34:13.234 [2024-07-25 00:14:18.708909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.234 [2024-07-25 00:14:18.708934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.234 qpair failed and we were unable to recover it.
00:34:13.234 [2024-07-25 00:14:18.709062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.234 [2024-07-25 00:14:18.709087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.234 qpair failed and we were unable to recover it.
00:34:13.234 [2024-07-25 00:14:18.709276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.234 [2024-07-25 00:14:18.709302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.234 qpair failed and we were unable to recover it.
00:34:13.234 [2024-07-25 00:14:18.709525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.234 [2024-07-25 00:14:18.709550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.234 qpair failed and we were unable to recover it.
00:34:13.234 [2024-07-25 00:14:18.709680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.234 [2024-07-25 00:14:18.709705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.234 qpair failed and we were unable to recover it.
00:34:13.234 [2024-07-25 00:14:18.709849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.234 [2024-07-25 00:14:18.709874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.234 qpair failed and we were unable to recover it.
00:34:13.234 [2024-07-25 00:14:18.710023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.234 [2024-07-25 00:14:18.710048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.234 qpair failed and we were unable to recover it.
00:34:13.234 [2024-07-25 00:14:18.710187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.234 [2024-07-25 00:14:18.710213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.234 qpair failed and we were unable to recover it.
00:34:13.234 [2024-07-25 00:14:18.710375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.710400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.710531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.710556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.710688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.710713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.710846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.710871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.710996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.711021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.711175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.711201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.711327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.711352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.711536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.711560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.711698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.711723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.711854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.711878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.712010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.712035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.712192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.712222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.712359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.712384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.712544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.712569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.712694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.712719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.712871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.712896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.713025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.713050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.713191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.713216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.713369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.713394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.713525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.713550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.713708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.713733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.713889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.713913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.714044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.714069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.714231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.714256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.714396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.714423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.714558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.714584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.714724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.714750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.714909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.714934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.715072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.715097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.715245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.715270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.715396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.715421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.235 [2024-07-25 00:14:18.715540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.235 [2024-07-25 00:14:18.715565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.235 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.715723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.715748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.715877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.715903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.716039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.716065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.716205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.716231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.716364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.716389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.716536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.716561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.716684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.716714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.716874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.716900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.717029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.717054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.717227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.717270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.717418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.717447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.717607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.717633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.717776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.717802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.717943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.717968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.718128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.718155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.718287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.718313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.718478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.718505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.718652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.718678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.718857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.718883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.719012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.719037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.719207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.719233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.719390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.719417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.719574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.719600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.719743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.719769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.719902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.719928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.720160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.720186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.720349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.720376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.720506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.720531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.720669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.720694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.720825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.720850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.720979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.721006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.721147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.721175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.721337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.721364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.721550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.721578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.721741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.721767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.236 qpair failed and we were unable to recover it.
00:34:13.236 [2024-07-25 00:14:18.721902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.236 [2024-07-25 00:14:18.721928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.722094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.237 [2024-07-25 00:14:18.722125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.722285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.237 [2024-07-25 00:14:18.722310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.722463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.237 [2024-07-25 00:14:18.722488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.722619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.237 [2024-07-25 00:14:18.722644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.722775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.237 [2024-07-25 00:14:18.722800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.722923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.237 [2024-07-25 00:14:18.722948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.723073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.237 [2024-07-25 00:14:18.723098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.723245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.237 [2024-07-25 00:14:18.723270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.723419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.237 [2024-07-25 00:14:18.723444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.723602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.237 [2024-07-25 00:14:18.723627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.723746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.237 [2024-07-25 00:14:18.723771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.723913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.237 [2024-07-25 00:14:18.723938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.724068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.237 [2024-07-25 00:14:18.724093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.724224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.237 [2024-07-25 00:14:18.724249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.724432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.237 [2024-07-25 00:14:18.724458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.724580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.237 [2024-07-25 00:14:18.724605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.724725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.237 [2024-07-25 00:14:18.724750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.237 qpair failed and we were unable to recover it.
00:34:13.237 [2024-07-25 00:14:18.724908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.724932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 00:34:13.237 [2024-07-25 00:14:18.725062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.725088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 00:34:13.237 [2024-07-25 00:14:18.725244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.725269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 00:34:13.237 [2024-07-25 00:14:18.725391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.725416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 00:34:13.237 [2024-07-25 00:14:18.725563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.725587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 
00:34:13.237 [2024-07-25 00:14:18.725744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.725769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 00:34:13.237 [2024-07-25 00:14:18.725902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.725927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 00:34:13.237 [2024-07-25 00:14:18.726087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.726125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 00:34:13.237 [2024-07-25 00:14:18.726266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.726291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 00:34:13.237 [2024-07-25 00:14:18.726447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.726472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 
00:34:13.237 [2024-07-25 00:14:18.726655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.726680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 00:34:13.237 [2024-07-25 00:14:18.726818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.726843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 00:34:13.237 [2024-07-25 00:14:18.726994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.727019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 00:34:13.237 [2024-07-25 00:14:18.727157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.727182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 00:34:13.237 [2024-07-25 00:14:18.727335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.727360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 
00:34:13.237 [2024-07-25 00:14:18.727528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.727553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 00:34:13.237 [2024-07-25 00:14:18.727702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.727727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 00:34:13.237 [2024-07-25 00:14:18.727876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.727901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 00:34:13.237 [2024-07-25 00:14:18.728038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.237 [2024-07-25 00:14:18.728064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.237 qpair failed and we were unable to recover it. 00:34:13.237 [2024-07-25 00:14:18.728202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.728227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 
00:34:13.238 [2024-07-25 00:14:18.728355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.728380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.728534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.728559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.728682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.728707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.728873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.728898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.729029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.729054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 
00:34:13.238 [2024-07-25 00:14:18.729184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.729209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.729350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.729376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.729533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.729558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.729693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.729718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.729845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.729870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 
00:34:13.238 [2024-07-25 00:14:18.730003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.730028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.730171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.730197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.730333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.730359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.730488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.730513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.730651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.730676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 
00:34:13.238 [2024-07-25 00:14:18.730805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.730830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.730967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.730992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.731141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.731166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.731294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.731319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.731438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.731463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 
00:34:13.238 [2024-07-25 00:14:18.731599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.731624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.731747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.731772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.731910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.731935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.732089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.732118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.732256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.732281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 
00:34:13.238 [2024-07-25 00:14:18.732409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.732433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.732584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.732609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.732777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.732802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.732925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.732954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.733087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.733120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 
00:34:13.238 [2024-07-25 00:14:18.733270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.733295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.733431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.733456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.733580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.733605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.733749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.733774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.733931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.733956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 
00:34:13.238 [2024-07-25 00:14:18.734106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.238 [2024-07-25 00:14:18.734131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.238 qpair failed and we were unable to recover it. 00:34:13.238 [2024-07-25 00:14:18.734275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.734300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.734448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.734472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.734629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.734654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.734781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.734806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 
00:34:13.239 [2024-07-25 00:14:18.734939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.734964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.735113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.735139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.735276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.735301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.735446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.735470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.735622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.735647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 
00:34:13.239 [2024-07-25 00:14:18.735783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.735808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.735952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.735976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.736114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.736140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.736279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.736304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.736429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.736454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 
00:34:13.239 [2024-07-25 00:14:18.736576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.736601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.736756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.736781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.736939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.736964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.737124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.737150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.737282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.737307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 
00:34:13.239 [2024-07-25 00:14:18.737469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.737505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.737638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.737664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.737824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.737848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.737983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.738008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.738168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.738193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 
00:34:13.239 [2024-07-25 00:14:18.738329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.738354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.738494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.738519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.738673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.738698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.738835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.738859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.739018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.739043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 
00:34:13.239 [2024-07-25 00:14:18.739178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.739204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.739342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.739367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.739494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.739519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.739659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.739683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.739821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.739846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 
00:34:13.239 [2024-07-25 00:14:18.739981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.740005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.740142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.740168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.740322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.239 [2024-07-25 00:14:18.740347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.239 qpair failed and we were unable to recover it. 00:34:13.239 [2024-07-25 00:14:18.740504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.240 [2024-07-25 00:14:18.740528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.240 qpair failed and we were unable to recover it. 00:34:13.240 [2024-07-25 00:14:18.740681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.240 [2024-07-25 00:14:18.740706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.240 qpair failed and we were unable to recover it. 
00:34:13.240 [2024-07-25 00:14:18.740844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.740869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.741018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.741043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.741181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.741206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.741332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.741358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.741512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.741537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.741690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.741715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.741882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.741907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.742058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.742083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.742256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.742281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.742410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.742435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.742602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.742627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.742775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.742800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.742958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.742984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.743115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.743140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.743293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.743318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.743459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.743483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.743622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.743647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.743781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.743806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.743936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.743961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.744106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.744131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.744265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.744291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.744418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.744447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.744570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.744595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.744725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.744750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.744883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.744908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.745065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.745090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.745232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.745256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.745388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.745413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.745545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.745569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.240 qpair failed and we were unable to recover it.
00:34:13.240 [2024-07-25 00:14:18.745694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.240 [2024-07-25 00:14:18.745718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.745855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.745881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.746063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.746088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.746239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.746264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.746388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.746414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.746571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.746596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.746723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.746747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.746877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.746902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.747053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.747078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.747215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.747241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.747390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.747415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.747546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.747571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.747703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.747728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.747869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.747894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.748053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.748078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.748216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.748243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.748398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.748423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.748551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.748576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.748704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.748729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.748902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.748930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.749109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.749134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.749268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.749293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.749469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.749494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.749625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.749650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.749774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.749799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.749965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.749990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.750148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.750174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.750317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.750342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.750481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.750505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.750640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.750666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.750809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.750833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.750970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.750994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.751145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.751170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.751296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.751321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.751477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.751502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.751622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.751647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.751775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.241 [2024-07-25 00:14:18.751799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.241 qpair failed and we were unable to recover it.
00:34:13.241 [2024-07-25 00:14:18.751953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.751978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.752159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.752185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.752332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.752357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.752500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.752525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.752689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.752715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.752890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.752914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.753064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.753089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.753241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.753266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.753413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.753438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.753585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.753609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.753765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.753790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.753928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.753953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.754086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.754118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.754238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.754263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.754421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.754446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.754565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.754591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.754715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.754740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.754871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.754896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.755044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.755069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.755201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.755226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.755359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.755384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.755540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.755564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.755714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.755738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.755869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.755898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.756054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.756078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.756241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.756267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.756413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.756438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.756603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.756628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.756758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.242 [2024-07-25 00:14:18.756783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.242 qpair failed and we were unable to recover it.
00:34:13.242 [2024-07-25 00:14:18.756911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.242 [2024-07-25 00:14:18.756936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.242 qpair failed and we were unable to recover it. 00:34:13.242 [2024-07-25 00:14:18.757070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.242 [2024-07-25 00:14:18.757095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.242 qpair failed and we were unable to recover it. 00:34:13.242 [2024-07-25 00:14:18.757233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.242 [2024-07-25 00:14:18.757258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.242 qpair failed and we were unable to recover it. 00:34:13.242 [2024-07-25 00:14:18.757415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.242 [2024-07-25 00:14:18.757439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.242 qpair failed and we were unable to recover it. 00:34:13.242 [2024-07-25 00:14:18.757597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.242 [2024-07-25 00:14:18.757622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.242 qpair failed and we were unable to recover it. 
00:34:13.242 [2024-07-25 00:14:18.757750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.242 [2024-07-25 00:14:18.757775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.242 qpair failed and we were unable to recover it. 00:34:13.242 [2024-07-25 00:14:18.757904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.242 [2024-07-25 00:14:18.757929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.242 qpair failed and we were unable to recover it. 00:34:13.242 [2024-07-25 00:14:18.758081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.242 [2024-07-25 00:14:18.758112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.242 qpair failed and we were unable to recover it. 00:34:13.242 [2024-07-25 00:14:18.758266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.758291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.758423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.758449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 
00:34:13.243 [2024-07-25 00:14:18.758579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.758604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.758729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.758754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.758902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.758927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.759077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.759108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.759249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.759274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 
00:34:13.243 [2024-07-25 00:14:18.759422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.759446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.759577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.759602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.759730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.759754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.759890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.759915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.760051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.760077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 
00:34:13.243 [2024-07-25 00:14:18.760220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.760246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.760401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.760432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.760570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.760596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.760719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.760744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.760882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.760906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 
00:34:13.243 [2024-07-25 00:14:18.761050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.761074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.761204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.761230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.761393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.761418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.761558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.761583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.761709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.761734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 
00:34:13.243 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:13.243 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:34:13.243 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:13.243 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:13.243 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:13.243 [2024-07-25 00:14:18.763472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.763497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.763628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.763652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.763787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.763813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.243 qpair failed and we were unable to recover it. 00:34:13.243 [2024-07-25 00:14:18.763951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.243 [2024-07-25 00:14:18.763976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.764107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.764132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 
00:34:13.244 [2024-07-25 00:14:18.764259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.764284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.764424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.764449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.764610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.764635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.764781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.764806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.764938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.764963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 
00:34:13.244 [2024-07-25 00:14:18.765109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.765134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.765267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.765293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.765446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.765472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.765626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.765652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.765819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.765846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 
00:34:13.244 [2024-07-25 00:14:18.766006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.766030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.766212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.766239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.766371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.766396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.766558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.766583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.766711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.766736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 
00:34:13.244 [2024-07-25 00:14:18.766882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.766907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.767062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.767091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.767289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.767315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.767442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.767468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.767621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.767647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 
00:34:13.244 [2024-07-25 00:14:18.767799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.767825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.767967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.767992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.768135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.768160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.768294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.768319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.768458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.768484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 
00:34:13.244 [2024-07-25 00:14:18.768617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.768642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.768783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.768808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.768995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.769021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.769156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.769181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.769313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.769338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 
00:34:13.244 [2024-07-25 00:14:18.769476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.769502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.769634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.769659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.769814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.769839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.769977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.770002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 00:34:13.244 [2024-07-25 00:14:18.770154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.770179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.244 qpair failed and we were unable to recover it. 
00:34:13.244 [2024-07-25 00:14:18.770348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.244 [2024-07-25 00:14:18.770373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.245 qpair failed and we were unable to recover it. 00:34:13.245 [2024-07-25 00:14:18.770505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.245 [2024-07-25 00:14:18.770530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.245 qpair failed and we were unable to recover it. 00:34:13.245 [2024-07-25 00:14:18.770687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.245 [2024-07-25 00:14:18.770712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.245 qpair failed and we were unable to recover it. 00:34:13.245 [2024-07-25 00:14:18.770849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.245 [2024-07-25 00:14:18.770874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.245 qpair failed and we were unable to recover it. 00:34:13.245 [2024-07-25 00:14:18.771039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.245 [2024-07-25 00:14:18.771064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.245 qpair failed and we were unable to recover it. 
00:34:13.245 [2024-07-25 00:14:18.771219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.245 [2024-07-25 00:14:18.771245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.245 qpair failed and we were unable to recover it. 00:34:13.245 [2024-07-25 00:14:18.771377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.245 [2024-07-25 00:14:18.771402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.245 qpair failed and we were unable to recover it. 00:34:13.245 [2024-07-25 00:14:18.771544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.245 [2024-07-25 00:14:18.771569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.245 qpair failed and we were unable to recover it. 00:34:13.245 [2024-07-25 00:14:18.771698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.245 [2024-07-25 00:14:18.771723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.245 qpair failed and we were unable to recover it. 00:34:13.245 [2024-07-25 00:14:18.771855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.245 [2024-07-25 00:14:18.771880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.245 qpair failed and we were unable to recover it. 
00:34:13.247 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:13.247 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:13.247 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:13.248 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:13.248 [2024-07-25 00:14:18.788844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.788869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.788989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.789013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.789144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.789170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.789303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.789329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.789494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.789519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 
00:34:13.248 [2024-07-25 00:14:18.789646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.789671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.789807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.789832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.790030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.790055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.790185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.790211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.790339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.790364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 
00:34:13.248 [2024-07-25 00:14:18.790522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.790547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.790727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.790752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.790910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.790935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.791060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.791085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.791248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.791273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 
00:34:13.248 [2024-07-25 00:14:18.791414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.791438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.791562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.791586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.791712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.791737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.791863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.791888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.792018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.792042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 
00:34:13.248 [2024-07-25 00:14:18.792179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.792204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.792377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.792402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.792525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.792550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.792682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.792707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.792843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.792867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 
00:34:13.248 [2024-07-25 00:14:18.793020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.793044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.793177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.793203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.793344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.793369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.793540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.793565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.793689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.793714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 
00:34:13.248 [2024-07-25 00:14:18.793869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.793894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.248 [2024-07-25 00:14:18.794048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.248 [2024-07-25 00:14:18.794073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.248 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.794232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.794257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.794389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.794414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.794574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.794599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 
00:34:13.249 [2024-07-25 00:14:18.794777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.794802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.794949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.794974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.795117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.795143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.795270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.795295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.795455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.795480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 
00:34:13.249 [2024-07-25 00:14:18.795624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.795650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.795835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.795860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.796011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.796036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.796162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.796188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.796341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.796366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 
00:34:13.249 [2024-07-25 00:14:18.796511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.796536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.796683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.796708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.796836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.796866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.796988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.797013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.797149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.797183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 
00:34:13.249 [2024-07-25 00:14:18.797344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.797370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.797536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.797561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.797710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.797735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.797887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.797911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.798067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.798092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 
00:34:13.249 [2024-07-25 00:14:18.798330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.798355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.798518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.798543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.798690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.798714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.798861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.798886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.799005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.799030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 
00:34:13.249 [2024-07-25 00:14:18.799195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.799221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.799361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.799388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.799578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.799603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.799735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.799760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.799883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.799908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 
00:34:13.249 [2024-07-25 00:14:18.800067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.800092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.800248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.800274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.800411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.800436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.249 [2024-07-25 00:14:18.800585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.249 [2024-07-25 00:14:18.800609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.249 qpair failed and we were unable to recover it. 00:34:13.250 [2024-07-25 00:14:18.800774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.800799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 
00:34:13.250 [2024-07-25 00:14:18.800959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.800984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 00:34:13.250 [2024-07-25 00:14:18.801119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.801145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 00:34:13.250 [2024-07-25 00:14:18.801275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.801300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 00:34:13.250 [2024-07-25 00:14:18.801431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.801455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 00:34:13.250 [2024-07-25 00:14:18.801594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.801619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 
00:34:13.250 [2024-07-25 00:14:18.801781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.801806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 00:34:13.250 [2024-07-25 00:14:18.801938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.801963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 00:34:13.250 [2024-07-25 00:14:18.802098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.802127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 00:34:13.250 [2024-07-25 00:14:18.802272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.802298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 00:34:13.250 [2024-07-25 00:14:18.802444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.802469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 
00:34:13.250 [2024-07-25 00:14:18.802618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.802643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 00:34:13.250 [2024-07-25 00:14:18.802783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.802808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 00:34:13.250 [2024-07-25 00:14:18.802960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.802985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 00:34:13.250 [2024-07-25 00:14:18.803130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.803163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 00:34:13.250 [2024-07-25 00:14:18.803303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.803327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 
00:34:13.250 [2024-07-25 00:14:18.803460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.803485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 00:34:13.250 [2024-07-25 00:14:18.803624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.803648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 00:34:13.250 [2024-07-25 00:14:18.803775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.803800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 00:34:13.250 [2024-07-25 00:14:18.803955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.803983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 00:34:13.250 [2024-07-25 00:14:18.804142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.250 [2024-07-25 00:14:18.804176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.250 qpair failed and we were unable to recover it. 
00:34:13.252 Malloc0
00:34:13.252 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:13.252 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:13.252 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:13.252 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:13.253 [2024-07-25 00:14:18.819658] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:13.253 [2024-07-25 00:14:18.819825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.253 [2024-07-25 00:14:18.819849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.253 qpair failed and we were unable to recover it. 00:34:13.253 [2024-07-25 00:14:18.820008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.253 [2024-07-25 00:14:18.820033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.253 qpair failed and we were unable to recover it. 00:34:13.253 [2024-07-25 00:14:18.820170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.253 [2024-07-25 00:14:18.820195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.253 qpair failed and we were unable to recover it. 00:34:13.253 [2024-07-25 00:14:18.820332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.253 [2024-07-25 00:14:18.820361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.253 qpair failed and we were unable to recover it. 00:34:13.253 [2024-07-25 00:14:18.820500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.253 [2024-07-25 00:14:18.820526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.253 qpair failed and we were unable to recover it. 
00:34:13.254 [2024-07-25 00:14:18.827701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.254 [2024-07-25 00:14:18.827726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.254 qpair failed and we were unable to recover it.
00:34:13.254 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:13.254 [2024-07-25 00:14:18.827919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.254 [2024-07-25 00:14:18.827944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.254 qpair failed and we were unable to recover it.
00:34:13.254 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:13.254 [2024-07-25 00:14:18.828085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.254 [2024-07-25 00:14:18.828116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.254 qpair failed and we were unable to recover it.
00:34:13.254 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:13.254 [2024-07-25 00:14:18.828261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.254 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:13.254 [2024-07-25 00:14:18.828286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.254 qpair failed and we were unable to recover it.
00:34:13.254 [2024-07-25 00:14:18.829268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.254 [2024-07-25 00:14:18.829294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.254 qpair failed and we were unable to recover it.
00:34:13.254 [2024-07-25 00:14:18.829435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.254 [2024-07-25 00:14:18.829460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.254 qpair failed and we were unable to recover it.
00:34:13.254 [2024-07-25 00:14:18.829641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.254 [2024-07-25 00:14:18.829685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.254 qpair failed and we were unable to recover it.
00:34:13.254 [2024-07-25 00:14:18.829842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.254 [2024-07-25 00:14:18.829874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.254 qpair failed and we were unable to recover it.
00:34:13.254 [2024-07-25 00:14:18.830058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.254 [2024-07-25 00:14:18.830090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.254 qpair failed and we were unable to recover it.
00:34:13.254 [2024-07-25 00:14:18.830268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.254 [2024-07-25 00:14:18.830298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fda70000b90 with addr=10.0.0.2, port=4420
00:34:13.254 qpair failed and we were unable to recover it.
00:34:13.254 [2024-07-25 00:14:18.830478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.254 [2024-07-25 00:14:18.830504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.254 qpair failed and we were unable to recover it.
00:34:13.254 [2024-07-25 00:14:18.830637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.254 [2024-07-25 00:14:18.830662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.254 qpair failed and we were unable to recover it.
00:34:13.254 [2024-07-25 00:14:18.830827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.254 [2024-07-25 00:14:18.830852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.254 qpair failed and we were unable to recover it.
00:34:13.254 [2024-07-25 00:14:18.831004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.254 [2024-07-25 00:14:18.831029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.254 qpair failed and we were unable to recover it.
00:34:13.255 [2024-07-25 00:14:18.835570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.255 [2024-07-25 00:14:18.835595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.255 qpair failed and we were unable to recover it.
00:34:13.255 [2024-07-25 00:14:18.835790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.255 [2024-07-25 00:14:18.835815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.255 qpair failed and we were unable to recover it.
00:34:13.255 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:13.255 [2024-07-25 00:14:18.835951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.255 [2024-07-25 00:14:18.835976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.255 qpair failed and we were unable to recover it.
00:34:13.255 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:13.255 [2024-07-25 00:14:18.836133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.255 [2024-07-25 00:14:18.836161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.255 qpair failed and we were unable to recover it.
00:34:13.255 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:13.255 [2024-07-25 00:14:18.836297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.255 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:13.255 [2024-07-25 00:14:18.836322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.255 qpair failed and we were unable to recover it.
00:34:13.255 [2024-07-25 00:14:18.836450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.255 [2024-07-25 00:14:18.836475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.255 qpair failed and we were unable to recover it.
00:34:13.255 [2024-07-25 00:14:18.836618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.255 [2024-07-25 00:14:18.836644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.255 qpair failed and we were unable to recover it.
00:34:13.255 [2024-07-25 00:14:18.836789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.255 [2024-07-25 00:14:18.836814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.255 qpair failed and we were unable to recover it.
00:34:13.255 [2024-07-25 00:14:18.836970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.255 [2024-07-25 00:14:18.836995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420
00:34:13.255 qpair failed and we were unable to recover it.
00:34:13.256 [2024-07-25 00:14:18.839589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.839613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.839741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.839766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.839924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.839948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.840182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.840208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.840341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.840366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 
00:34:13.256 [2024-07-25 00:14:18.840528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.840553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.840789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.840814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.840973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.840999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.841137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.841162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.841315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.841344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 
00:34:13.256 [2024-07-25 00:14:18.841489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.841514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.841670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.841695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.841830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.841855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.841980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.842006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.842137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.842163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 
00:34:13.256 [2024-07-25 00:14:18.842290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.842314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.842442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.842467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.842657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.842682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.842814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.842839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.842981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.843006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 
00:34:13.256 [2024-07-25 00:14:18.843142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.843167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.843300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.843326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.843454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.843480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.843644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.843668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 [2024-07-25 00:14:18.843805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.843830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 
00:34:13.256 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.256 [2024-07-25 00:14:18.843958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.256 [2024-07-25 00:14:18.843984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.256 qpair failed and we were unable to recover it. 00:34:13.256 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:13.257 [2024-07-25 00:14:18.844125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.844150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.257 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:13.257 [2024-07-25 00:14:18.844308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.844333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 [2024-07-25 00:14:18.844488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.844513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 
00:34:13.257 [2024-07-25 00:14:18.844638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.844663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 [2024-07-25 00:14:18.844794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.844819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 [2024-07-25 00:14:18.844963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.844989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 [2024-07-25 00:14:18.845159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.845185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 [2024-07-25 00:14:18.845322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.845347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 
00:34:13.257 [2024-07-25 00:14:18.845469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.845493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 [2024-07-25 00:14:18.845631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.845656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 [2024-07-25 00:14:18.845802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.845827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 [2024-07-25 00:14:18.845985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.846009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 [2024-07-25 00:14:18.846144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.846169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 
00:34:13.257 [2024-07-25 00:14:18.846309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.846334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 [2024-07-25 00:14:18.846461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.846485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 [2024-07-25 00:14:18.846625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.846650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 [2024-07-25 00:14:18.846807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.846832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 [2024-07-25 00:14:18.846984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.847009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 
00:34:13.257 [2024-07-25 00:14:18.847152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.847177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 [2024-07-25 00:14:18.847316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.847341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 [2024-07-25 00:14:18.847494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.847519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 [2024-07-25 00:14:18.847650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.847675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 00:34:13.257 [2024-07-25 00:14:18.847805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.257 [2024-07-25 00:14:18.847835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4570 with addr=10.0.0.2, port=4420 00:34:13.257 qpair failed and we were unable to recover it. 
00:34:13.257 [2024-07-25 00:14:18.847867] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:13.257 [2024-07-25 00:14:18.850442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.257 [2024-07-25 00:14:18.850611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.257 [2024-07-25 00:14:18.850639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.257 [2024-07-25 00:14:18.850654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.257 [2024-07-25 00:14:18.850667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.257 [2024-07-25 00:14:18.850700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.257 qpair failed and we were unable to recover it. 
00:34:13.257 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.257 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:13.257 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.257 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:13.257 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.257 00:14:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 943880 00:34:13.257 [2024-07-25 00:14:18.860305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.257 [2024-07-25 00:14:18.860440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.257 [2024-07-25 00:14:18.860467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.257 [2024-07-25 00:14:18.860481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.257 [2024-07-25 00:14:18.860494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.257 [2024-07-25 00:14:18.860523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.257 qpair failed and we were unable to recover it. 
00:34:13.257 [2024-07-25 00:14:18.870299] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.257 [2024-07-25 00:14:18.870433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.257 [2024-07-25 00:14:18.870459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.257 [2024-07-25 00:14:18.870474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.257 [2024-07-25 00:14:18.870486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.257 [2024-07-25 00:14:18.870514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.257 qpair failed and we were unable to recover it. 
00:34:13.257 [2024-07-25 00:14:18.880309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.257 [2024-07-25 00:14:18.880449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.257 [2024-07-25 00:14:18.880480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.257 [2024-07-25 00:14:18.880496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.257 [2024-07-25 00:14:18.880508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.257 [2024-07-25 00:14:18.880537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.257 qpair failed and we were unable to recover it. 
00:34:13.257 [2024-07-25 00:14:18.890309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.257 [2024-07-25 00:14:18.890440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.257 [2024-07-25 00:14:18.890466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.257 [2024-07-25 00:14:18.890480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.257 [2024-07-25 00:14:18.890493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.257 [2024-07-25 00:14:18.890521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.257 qpair failed and we were unable to recover it. 
00:34:13.257 [2024-07-25 00:14:18.900311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.258 [2024-07-25 00:14:18.900443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.258 [2024-07-25 00:14:18.900469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.258 [2024-07-25 00:14:18.900482] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.258 [2024-07-25 00:14:18.900495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.258 [2024-07-25 00:14:18.900522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.258 qpair failed and we were unable to recover it. 
00:34:13.517 [2024-07-25 00:14:18.910355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.517 [2024-07-25 00:14:18.910508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.517 [2024-07-25 00:14:18.910537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.517 [2024-07-25 00:14:18.910552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.517 [2024-07-25 00:14:18.910565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.517 [2024-07-25 00:14:18.910595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.517 qpair failed and we were unable to recover it. 
00:34:13.517 [2024-07-25 00:14:18.920357] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.517 [2024-07-25 00:14:18.920493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.517 [2024-07-25 00:14:18.920519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.517 [2024-07-25 00:14:18.920534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.517 [2024-07-25 00:14:18.920547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.517 [2024-07-25 00:14:18.920579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.517 qpair failed and we were unable to recover it. 
00:34:13.517 [2024-07-25 00:14:18.930347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.517 [2024-07-25 00:14:18.930483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.517 [2024-07-25 00:14:18.930510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.517 [2024-07-25 00:14:18.930524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.517 [2024-07-25 00:14:18.930537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.517 [2024-07-25 00:14:18.930565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.517 qpair failed and we were unable to recover it. 
00:34:13.517 [2024-07-25 00:14:18.940380] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.517 [2024-07-25 00:14:18.940513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.517 [2024-07-25 00:14:18.940540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.517 [2024-07-25 00:14:18.940554] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.517 [2024-07-25 00:14:18.940568] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.517 [2024-07-25 00:14:18.940595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.517 qpair failed and we were unable to recover it.
00:34:13.517 [2024-07-25 00:14:18.950424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.517 [2024-07-25 00:14:18.950563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.517 [2024-07-25 00:14:18.950588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.517 [2024-07-25 00:14:18.950603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.517 [2024-07-25 00:14:18.950618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.517 [2024-07-25 00:14:18.950646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.517 qpair failed and we were unable to recover it.
00:34:13.517 [2024-07-25 00:14:18.960431] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.517 [2024-07-25 00:14:18.960584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.517 [2024-07-25 00:14:18.960610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.517 [2024-07-25 00:14:18.960625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.517 [2024-07-25 00:14:18.960639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.517 [2024-07-25 00:14:18.960669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.517 qpair failed and we were unable to recover it.
00:34:13.517 [2024-07-25 00:14:18.970506] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.517 [2024-07-25 00:14:18.970665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.517 [2024-07-25 00:14:18.970701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.517 [2024-07-25 00:14:18.970717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.517 [2024-07-25 00:14:18.970730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.517 [2024-07-25 00:14:18.970759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.517 qpair failed and we were unable to recover it.
00:34:13.517 [2024-07-25 00:14:18.980542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.517 [2024-07-25 00:14:18.980683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.517 [2024-07-25 00:14:18.980711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.518 [2024-07-25 00:14:18.980726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.518 [2024-07-25 00:14:18.980739] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.518 [2024-07-25 00:14:18.980767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.518 qpair failed and we were unable to recover it.
00:34:13.518 [2024-07-25 00:14:18.990571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.518 [2024-07-25 00:14:18.990709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.518 [2024-07-25 00:14:18.990735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.518 [2024-07-25 00:14:18.990749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.518 [2024-07-25 00:14:18.990762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.518 [2024-07-25 00:14:18.990790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.518 qpair failed and we were unable to recover it.
00:34:13.518 [2024-07-25 00:14:19.000574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.518 [2024-07-25 00:14:19.000724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.518 [2024-07-25 00:14:19.000750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.518 [2024-07-25 00:14:19.000764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.518 [2024-07-25 00:14:19.000777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.518 [2024-07-25 00:14:19.000805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.518 qpair failed and we were unable to recover it.
00:34:13.518 [2024-07-25 00:14:19.010606] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.518 [2024-07-25 00:14:19.010743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.518 [2024-07-25 00:14:19.010769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.518 [2024-07-25 00:14:19.010783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.518 [2024-07-25 00:14:19.010801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.518 [2024-07-25 00:14:19.010830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.518 qpair failed and we were unable to recover it.
00:34:13.518 [2024-07-25 00:14:19.020701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.518 [2024-07-25 00:14:19.020860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.518 [2024-07-25 00:14:19.020885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.518 [2024-07-25 00:14:19.020899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.518 [2024-07-25 00:14:19.020912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.518 [2024-07-25 00:14:19.020939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.518 qpair failed and we were unable to recover it.
00:34:13.518 [2024-07-25 00:14:19.030637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.518 [2024-07-25 00:14:19.030795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.518 [2024-07-25 00:14:19.030820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.518 [2024-07-25 00:14:19.030835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.518 [2024-07-25 00:14:19.030849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.518 [2024-07-25 00:14:19.030877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.518 qpair failed and we were unable to recover it.
00:34:13.518 [2024-07-25 00:14:19.040656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.518 [2024-07-25 00:14:19.040790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.518 [2024-07-25 00:14:19.040816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.518 [2024-07-25 00:14:19.040830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.518 [2024-07-25 00:14:19.040844] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.518 [2024-07-25 00:14:19.040873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.518 qpair failed and we were unable to recover it.
00:34:13.518 [2024-07-25 00:14:19.050705] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.518 [2024-07-25 00:14:19.050835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.518 [2024-07-25 00:14:19.050860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.518 [2024-07-25 00:14:19.050874] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.518 [2024-07-25 00:14:19.050887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.518 [2024-07-25 00:14:19.050914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.518 qpair failed and we were unable to recover it.
00:34:13.518 [2024-07-25 00:14:19.060771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.518 [2024-07-25 00:14:19.060901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.518 [2024-07-25 00:14:19.060927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.518 [2024-07-25 00:14:19.060941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.518 [2024-07-25 00:14:19.060954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.518 [2024-07-25 00:14:19.060982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.518 qpair failed and we were unable to recover it.
00:34:13.518 [2024-07-25 00:14:19.070763] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.518 [2024-07-25 00:14:19.070895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.518 [2024-07-25 00:14:19.070920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.518 [2024-07-25 00:14:19.070934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.518 [2024-07-25 00:14:19.070947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.518 [2024-07-25 00:14:19.070975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.518 qpair failed and we were unable to recover it.
00:34:13.518 [2024-07-25 00:14:19.080783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.518 [2024-07-25 00:14:19.080923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.518 [2024-07-25 00:14:19.080954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.518 [2024-07-25 00:14:19.080968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.518 [2024-07-25 00:14:19.080981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.518 [2024-07-25 00:14:19.081011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.518 qpair failed and we were unable to recover it.
00:34:13.518 [2024-07-25 00:14:19.090870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.518 [2024-07-25 00:14:19.091009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.518 [2024-07-25 00:14:19.091034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.518 [2024-07-25 00:14:19.091049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.518 [2024-07-25 00:14:19.091062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.518 [2024-07-25 00:14:19.091089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.518 qpair failed and we were unable to recover it.
00:34:13.518 [2024-07-25 00:14:19.100862] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.518 [2024-07-25 00:14:19.100983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.518 [2024-07-25 00:14:19.101009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.518 [2024-07-25 00:14:19.101023] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.518 [2024-07-25 00:14:19.101041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.518 [2024-07-25 00:14:19.101069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.518 qpair failed and we were unable to recover it.
00:34:13.518 [2024-07-25 00:14:19.110884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.519 [2024-07-25 00:14:19.111036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.519 [2024-07-25 00:14:19.111061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.519 [2024-07-25 00:14:19.111075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.519 [2024-07-25 00:14:19.111091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.519 [2024-07-25 00:14:19.111125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.519 qpair failed and we were unable to recover it.
00:34:13.519 [2024-07-25 00:14:19.120924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.519 [2024-07-25 00:14:19.121068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.519 [2024-07-25 00:14:19.121093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.519 [2024-07-25 00:14:19.121119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.519 [2024-07-25 00:14:19.121133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.519 [2024-07-25 00:14:19.121166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.519 qpair failed and we were unable to recover it.
00:34:13.519 [2024-07-25 00:14:19.130961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.519 [2024-07-25 00:14:19.131095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.519 [2024-07-25 00:14:19.131128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.519 [2024-07-25 00:14:19.131143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.519 [2024-07-25 00:14:19.131156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.519 [2024-07-25 00:14:19.131184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.519 qpair failed and we were unable to recover it.
00:34:13.519 [2024-07-25 00:14:19.141013] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.519 [2024-07-25 00:14:19.141164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.519 [2024-07-25 00:14:19.141190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.519 [2024-07-25 00:14:19.141204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.519 [2024-07-25 00:14:19.141217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.519 [2024-07-25 00:14:19.141246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.519 qpair failed and we were unable to recover it.
00:34:13.519 [2024-07-25 00:14:19.150991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.519 [2024-07-25 00:14:19.151128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.519 [2024-07-25 00:14:19.151154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.519 [2024-07-25 00:14:19.151169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.519 [2024-07-25 00:14:19.151181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.519 [2024-07-25 00:14:19.151209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.519 qpair failed and we were unable to recover it.
00:34:13.519 [2024-07-25 00:14:19.161038] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.519 [2024-07-25 00:14:19.161229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.519 [2024-07-25 00:14:19.161254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.519 [2024-07-25 00:14:19.161268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.519 [2024-07-25 00:14:19.161280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.519 [2024-07-25 00:14:19.161309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.519 qpair failed and we were unable to recover it.
00:34:13.519 [2024-07-25 00:14:19.171049] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.519 [2024-07-25 00:14:19.171196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.519 [2024-07-25 00:14:19.171222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.519 [2024-07-25 00:14:19.171235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.519 [2024-07-25 00:14:19.171249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.519 [2024-07-25 00:14:19.171277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.519 qpair failed and we were unable to recover it.
00:34:13.519 [2024-07-25 00:14:19.181061] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.519 [2024-07-25 00:14:19.181222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.519 [2024-07-25 00:14:19.181247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.519 [2024-07-25 00:14:19.181261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.519 [2024-07-25 00:14:19.181274] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.519 [2024-07-25 00:14:19.181302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.519 qpair failed and we were unable to recover it.
00:34:13.519 [2024-07-25 00:14:19.191091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.519 [2024-07-25 00:14:19.191257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.519 [2024-07-25 00:14:19.191283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.519 [2024-07-25 00:14:19.191297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.519 [2024-07-25 00:14:19.191316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.519 [2024-07-25 00:14:19.191345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.519 qpair failed and we were unable to recover it.
00:34:13.519 [2024-07-25 00:14:19.201136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.519 [2024-07-25 00:14:19.201273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.519 [2024-07-25 00:14:19.201298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.519 [2024-07-25 00:14:19.201313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.519 [2024-07-25 00:14:19.201326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.519 [2024-07-25 00:14:19.201353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.519 qpair failed and we were unable to recover it.
00:34:13.519 [2024-07-25 00:14:19.211153] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.519 [2024-07-25 00:14:19.211285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.519 [2024-07-25 00:14:19.211310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.519 [2024-07-25 00:14:19.211324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.519 [2024-07-25 00:14:19.211337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.519 [2024-07-25 00:14:19.211366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.519 qpair failed and we were unable to recover it.
00:34:13.519 [2024-07-25 00:14:19.221182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.519 [2024-07-25 00:14:19.221316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.519 [2024-07-25 00:14:19.221341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.519 [2024-07-25 00:14:19.221355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.519 [2024-07-25 00:14:19.221368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.519 [2024-07-25 00:14:19.221396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.519 qpair failed and we were unable to recover it.
00:34:13.519 [2024-07-25 00:14:19.231218] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.519 [2024-07-25 00:14:19.231365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.519 [2024-07-25 00:14:19.231393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.519 [2024-07-25 00:14:19.231408] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.519 [2024-07-25 00:14:19.231421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.519 [2024-07-25 00:14:19.231449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.519 qpair failed and we were unable to recover it.
00:34:13.779 [2024-07-25 00:14:19.241247] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.779 [2024-07-25 00:14:19.241387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.779 [2024-07-25 00:14:19.241415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.779 [2024-07-25 00:14:19.241429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.779 [2024-07-25 00:14:19.241443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.779 [2024-07-25 00:14:19.241472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.779 qpair failed and we were unable to recover it.
00:34:13.779 [2024-07-25 00:14:19.251285] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.779 [2024-07-25 00:14:19.251462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.779 [2024-07-25 00:14:19.251488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.779 [2024-07-25 00:14:19.251503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.779 [2024-07-25 00:14:19.251516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.779 [2024-07-25 00:14:19.251543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.779 qpair failed and we were unable to recover it.
00:34:13.779 [2024-07-25 00:14:19.261303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.779 [2024-07-25 00:14:19.261431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.779 [2024-07-25 00:14:19.261457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.779 [2024-07-25 00:14:19.261471] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.779 [2024-07-25 00:14:19.261484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.779 [2024-07-25 00:14:19.261512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.779 qpair failed and we were unable to recover it.
00:34:13.779 [2024-07-25 00:14:19.271338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.779 [2024-07-25 00:14:19.271496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.779 [2024-07-25 00:14:19.271522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.779 [2024-07-25 00:14:19.271536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.779 [2024-07-25 00:14:19.271549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.779 [2024-07-25 00:14:19.271576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.779 qpair failed and we were unable to recover it.
00:34:13.779 [2024-07-25 00:14:19.281371] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.779 [2024-07-25 00:14:19.281505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.779 [2024-07-25 00:14:19.281530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.779 [2024-07-25 00:14:19.281544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.779 [2024-07-25 00:14:19.281563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.779 [2024-07-25 00:14:19.281591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.779 qpair failed and we were unable to recover it.
00:34:13.779 [2024-07-25 00:14:19.291417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.779 [2024-07-25 00:14:19.291563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.779 [2024-07-25 00:14:19.291589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.779 [2024-07-25 00:14:19.291603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.779 [2024-07-25 00:14:19.291616] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:13.779 [2024-07-25 00:14:19.291646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.779 qpair failed and we were unable to recover it.
00:34:13.779 [2024-07-25 00:14:19.301417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.779 [2024-07-25 00:14:19.301547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.779 [2024-07-25 00:14:19.301572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.779 [2024-07-25 00:14:19.301586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.779 [2024-07-25 00:14:19.301599] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.779 [2024-07-25 00:14:19.301626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.779 qpair failed and we were unable to recover it. 
00:34:13.779 [2024-07-25 00:14:19.311461] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.779 [2024-07-25 00:14:19.311602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.779 [2024-07-25 00:14:19.311627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.779 [2024-07-25 00:14:19.311642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.779 [2024-07-25 00:14:19.311655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.779 [2024-07-25 00:14:19.311682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.779 qpair failed and we were unable to recover it. 
00:34:13.779 [2024-07-25 00:14:19.321499] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.779 [2024-07-25 00:14:19.321636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.779 [2024-07-25 00:14:19.321662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.779 [2024-07-25 00:14:19.321676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.779 [2024-07-25 00:14:19.321688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.779 [2024-07-25 00:14:19.321716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.779 qpair failed and we were unable to recover it. 
00:34:13.779 [2024-07-25 00:14:19.331533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.779 [2024-07-25 00:14:19.331672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.779 [2024-07-25 00:14:19.331697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.779 [2024-07-25 00:14:19.331711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.779 [2024-07-25 00:14:19.331724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.779 [2024-07-25 00:14:19.331751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.779 qpair failed and we were unable to recover it. 
00:34:13.779 [2024-07-25 00:14:19.341548] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.779 [2024-07-25 00:14:19.341676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.779 [2024-07-25 00:14:19.341702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.779 [2024-07-25 00:14:19.341717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.779 [2024-07-25 00:14:19.341729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.779 [2024-07-25 00:14:19.341757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.779 qpair failed and we were unable to recover it. 
00:34:13.779 [2024-07-25 00:14:19.351550] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.779 [2024-07-25 00:14:19.351675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.779 [2024-07-25 00:14:19.351701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.779 [2024-07-25 00:14:19.351715] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.779 [2024-07-25 00:14:19.351728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.780 [2024-07-25 00:14:19.351755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.780 qpair failed and we were unable to recover it. 
00:34:13.780 [2024-07-25 00:14:19.361721] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.780 [2024-07-25 00:14:19.361860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.780 [2024-07-25 00:14:19.361886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.780 [2024-07-25 00:14:19.361899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.780 [2024-07-25 00:14:19.361913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.780 [2024-07-25 00:14:19.361940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.780 qpair failed and we were unable to recover it. 
00:34:13.780 [2024-07-25 00:14:19.371625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.780 [2024-07-25 00:14:19.371804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.780 [2024-07-25 00:14:19.371829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.780 [2024-07-25 00:14:19.371849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.780 [2024-07-25 00:14:19.371863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.780 [2024-07-25 00:14:19.371891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.780 qpair failed and we were unable to recover it. 
00:34:13.780 [2024-07-25 00:14:19.381652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.780 [2024-07-25 00:14:19.381789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.780 [2024-07-25 00:14:19.381815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.780 [2024-07-25 00:14:19.381829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.780 [2024-07-25 00:14:19.381842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.780 [2024-07-25 00:14:19.381870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.780 qpair failed and we were unable to recover it. 
00:34:13.780 [2024-07-25 00:14:19.391709] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.780 [2024-07-25 00:14:19.391843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.780 [2024-07-25 00:14:19.391868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.780 [2024-07-25 00:14:19.391882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.780 [2024-07-25 00:14:19.391895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.780 [2024-07-25 00:14:19.391923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.780 qpair failed and we were unable to recover it. 
00:34:13.780 [2024-07-25 00:14:19.401724] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.780 [2024-07-25 00:14:19.401903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.780 [2024-07-25 00:14:19.401929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.780 [2024-07-25 00:14:19.401943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.780 [2024-07-25 00:14:19.401956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.780 [2024-07-25 00:14:19.401983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.780 qpair failed and we were unable to recover it. 
00:34:13.780 [2024-07-25 00:14:19.411743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.780 [2024-07-25 00:14:19.411879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.780 [2024-07-25 00:14:19.411904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.780 [2024-07-25 00:14:19.411918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.780 [2024-07-25 00:14:19.411930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.780 [2024-07-25 00:14:19.411957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.780 qpair failed and we were unable to recover it. 
00:34:13.780 [2024-07-25 00:14:19.421763] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.780 [2024-07-25 00:14:19.421893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.780 [2024-07-25 00:14:19.421919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.780 [2024-07-25 00:14:19.421933] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.780 [2024-07-25 00:14:19.421946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.780 [2024-07-25 00:14:19.421974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.780 qpair failed and we were unable to recover it. 
00:34:13.780 [2024-07-25 00:14:19.431788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.780 [2024-07-25 00:14:19.431919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.780 [2024-07-25 00:14:19.431944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.780 [2024-07-25 00:14:19.431958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.780 [2024-07-25 00:14:19.431970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.780 [2024-07-25 00:14:19.431998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.780 qpair failed and we were unable to recover it. 
00:34:13.780 [2024-07-25 00:14:19.441847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.780 [2024-07-25 00:14:19.442027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.780 [2024-07-25 00:14:19.442052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.780 [2024-07-25 00:14:19.442066] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.780 [2024-07-25 00:14:19.442078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.780 [2024-07-25 00:14:19.442114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.780 qpair failed and we were unable to recover it. 
00:34:13.780 [2024-07-25 00:14:19.451875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.780 [2024-07-25 00:14:19.452060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.780 [2024-07-25 00:14:19.452085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.780 [2024-07-25 00:14:19.452098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.780 [2024-07-25 00:14:19.452121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.780 [2024-07-25 00:14:19.452150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.780 qpair failed and we were unable to recover it. 
00:34:13.780 [2024-07-25 00:14:19.461904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.780 [2024-07-25 00:14:19.462051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.780 [2024-07-25 00:14:19.462076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.780 [2024-07-25 00:14:19.462096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.780 [2024-07-25 00:14:19.462117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.780 [2024-07-25 00:14:19.462153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.780 qpair failed and we were unable to recover it. 
00:34:13.780 [2024-07-25 00:14:19.472020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.780 [2024-07-25 00:14:19.472167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.780 [2024-07-25 00:14:19.472193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.780 [2024-07-25 00:14:19.472206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.780 [2024-07-25 00:14:19.472219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.780 [2024-07-25 00:14:19.472247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.780 qpair failed and we were unable to recover it. 
00:34:13.780 [2024-07-25 00:14:19.481972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.780 [2024-07-25 00:14:19.482119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.780 [2024-07-25 00:14:19.482145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.781 [2024-07-25 00:14:19.482159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.781 [2024-07-25 00:14:19.482171] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.781 [2024-07-25 00:14:19.482199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.781 qpair failed and we were unable to recover it. 
00:34:13.781 [2024-07-25 00:14:19.491992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.781 [2024-07-25 00:14:19.492201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.781 [2024-07-25 00:14:19.492237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.781 [2024-07-25 00:14:19.492261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.781 [2024-07-25 00:14:19.492277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:13.781 [2024-07-25 00:14:19.492307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.781 qpair failed and we were unable to recover it. 
00:34:14.039 [2024-07-25 00:14:19.502097] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.039 [2024-07-25 00:14:19.502240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.039 [2024-07-25 00:14:19.502268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.039 [2024-07-25 00:14:19.502282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.039 [2024-07-25 00:14:19.502295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.039 [2024-07-25 00:14:19.502324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.039 qpair failed and we were unable to recover it. 
00:34:14.039 [2024-07-25 00:14:19.512005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.039 [2024-07-25 00:14:19.512162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.039 [2024-07-25 00:14:19.512197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.039 [2024-07-25 00:14:19.512212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.039 [2024-07-25 00:14:19.512224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.039 [2024-07-25 00:14:19.512253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.039 qpair failed and we were unable to recover it. 
00:34:14.039 [2024-07-25 00:14:19.522050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.039 [2024-07-25 00:14:19.522233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.039 [2024-07-25 00:14:19.522259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.039 [2024-07-25 00:14:19.522273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.039 [2024-07-25 00:14:19.522286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.040 [2024-07-25 00:14:19.522313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.040 qpair failed and we were unable to recover it. 
00:34:14.040 [2024-07-25 00:14:19.532080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.040 [2024-07-25 00:14:19.532226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.040 [2024-07-25 00:14:19.532252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.040 [2024-07-25 00:14:19.532266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.040 [2024-07-25 00:14:19.532281] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.040 [2024-07-25 00:14:19.532309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.040 qpair failed and we were unable to recover it. 
00:34:14.040 [2024-07-25 00:14:19.542135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.040 [2024-07-25 00:14:19.542299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.040 [2024-07-25 00:14:19.542324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.040 [2024-07-25 00:14:19.542338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.040 [2024-07-25 00:14:19.542350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.040 [2024-07-25 00:14:19.542378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.040 qpair failed and we were unable to recover it. 
00:34:14.040 [2024-07-25 00:14:19.552154] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.040 [2024-07-25 00:14:19.552292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.040 [2024-07-25 00:14:19.552317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.040 [2024-07-25 00:14:19.552337] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.040 [2024-07-25 00:14:19.552351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.040 [2024-07-25 00:14:19.552378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.040 qpair failed and we were unable to recover it. 
00:34:14.040 [2024-07-25 00:14:19.562190] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.040 [2024-07-25 00:14:19.562333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.040 [2024-07-25 00:14:19.562358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.040 [2024-07-25 00:14:19.562372] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.040 [2024-07-25 00:14:19.562384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.040 [2024-07-25 00:14:19.562412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.040 qpair failed and we were unable to recover it. 
00:34:14.040 [2024-07-25 00:14:19.572172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.040 [2024-07-25 00:14:19.572302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.040 [2024-07-25 00:14:19.572327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.040 [2024-07-25 00:14:19.572341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.040 [2024-07-25 00:14:19.572353] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.040 [2024-07-25 00:14:19.572380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.040 qpair failed and we were unable to recover it. 
00:34:14.040 [2024-07-25 00:14:19.582217] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.040 [2024-07-25 00:14:19.582397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.040 [2024-07-25 00:14:19.582425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.040 [2024-07-25 00:14:19.582454] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.040 [2024-07-25 00:14:19.582466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.040 [2024-07-25 00:14:19.582493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.040 qpair failed and we were unable to recover it. 
[… the same CONNECT failure sequence repeats 35 more times at ~10 ms intervals, from 00:14:19.592 through 00:14:19.933 (console timestamps 00:34:14.040–00:34:14.302): "Unknown controller ID 0x1", "Connect command failed, rc -5" (trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1), "sct 1, sc 130", "Failed to connect tqpair=0xfc4570", "CQ transport error -6 (No such device or address) on qpair id 3", each attempt ending "qpair failed and we were unable to recover it." …]
00:34:14.302 [2024-07-25 00:14:19.943205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.302 [2024-07-25 00:14:19.943345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.302 [2024-07-25 00:14:19.943372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.302 [2024-07-25 00:14:19.943386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.302 [2024-07-25 00:14:19.943399] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.302 [2024-07-25 00:14:19.943427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.302 qpair failed and we were unable to recover it. 
00:34:14.302 [2024-07-25 00:14:19.953255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.302 [2024-07-25 00:14:19.953402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.302 [2024-07-25 00:14:19.953428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.302 [2024-07-25 00:14:19.953443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.302 [2024-07-25 00:14:19.953456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.302 [2024-07-25 00:14:19.953504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.302 qpair failed and we were unable to recover it. 
00:34:14.302 [2024-07-25 00:14:19.963276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.302 [2024-07-25 00:14:19.963417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.302 [2024-07-25 00:14:19.963444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.302 [2024-07-25 00:14:19.963458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.302 [2024-07-25 00:14:19.963471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.302 [2024-07-25 00:14:19.963500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.302 qpair failed and we were unable to recover it. 
00:34:14.302 [2024-07-25 00:14:19.973305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.302 [2024-07-25 00:14:19.973487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.302 [2024-07-25 00:14:19.973513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.302 [2024-07-25 00:14:19.973529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.302 [2024-07-25 00:14:19.973542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.302 [2024-07-25 00:14:19.973570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.302 qpair failed and we were unable to recover it. 
00:34:14.302 [2024-07-25 00:14:19.983326] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.302 [2024-07-25 00:14:19.983456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.302 [2024-07-25 00:14:19.983482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.302 [2024-07-25 00:14:19.983496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.302 [2024-07-25 00:14:19.983509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.302 [2024-07-25 00:14:19.983538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.302 qpair failed and we were unable to recover it. 
00:34:14.302 [2024-07-25 00:14:19.993394] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.302 [2024-07-25 00:14:19.993520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.302 [2024-07-25 00:14:19.993546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.302 [2024-07-25 00:14:19.993561] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.302 [2024-07-25 00:14:19.993575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.302 [2024-07-25 00:14:19.993604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.302 qpair failed and we were unable to recover it. 
00:34:14.302 [2024-07-25 00:14:20.003395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.302 [2024-07-25 00:14:20.003530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.302 [2024-07-25 00:14:20.003563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.303 [2024-07-25 00:14:20.003580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.303 [2024-07-25 00:14:20.003593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.303 [2024-07-25 00:14:20.003623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.303 qpair failed and we were unable to recover it. 
00:34:14.303 [2024-07-25 00:14:20.013418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.303 [2024-07-25 00:14:20.013570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.303 [2024-07-25 00:14:20.013599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.303 [2024-07-25 00:14:20.013615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.303 [2024-07-25 00:14:20.013629] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.303 [2024-07-25 00:14:20.013658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.303 qpair failed and we were unable to recover it. 
00:34:14.562 [2024-07-25 00:14:20.023564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.562 [2024-07-25 00:14:20.023711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.562 [2024-07-25 00:14:20.023740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.562 [2024-07-25 00:14:20.023758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.562 [2024-07-25 00:14:20.023771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.562 [2024-07-25 00:14:20.023816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.562 qpair failed and we were unable to recover it. 
00:34:14.562 [2024-07-25 00:14:20.033551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.562 [2024-07-25 00:14:20.033684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.562 [2024-07-25 00:14:20.033714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.562 [2024-07-25 00:14:20.033730] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.562 [2024-07-25 00:14:20.033744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.562 [2024-07-25 00:14:20.033789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.562 qpair failed and we were unable to recover it. 
00:34:14.562 [2024-07-25 00:14:20.043528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.562 [2024-07-25 00:14:20.043694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.562 [2024-07-25 00:14:20.043725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.562 [2024-07-25 00:14:20.043763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.562 [2024-07-25 00:14:20.043786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.562 [2024-07-25 00:14:20.043831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.562 qpair failed and we were unable to recover it. 
00:34:14.562 [2024-07-25 00:14:20.053560] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.562 [2024-07-25 00:14:20.053740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.562 [2024-07-25 00:14:20.053768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.562 [2024-07-25 00:14:20.053784] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.562 [2024-07-25 00:14:20.053797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.562 [2024-07-25 00:14:20.053825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.562 qpair failed and we were unable to recover it. 
00:34:14.562 [2024-07-25 00:14:20.063602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.562 [2024-07-25 00:14:20.063746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.562 [2024-07-25 00:14:20.063773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.562 [2024-07-25 00:14:20.063789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.562 [2024-07-25 00:14:20.063801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.562 [2024-07-25 00:14:20.063846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.562 qpair failed and we were unable to recover it. 
00:34:14.562 [2024-07-25 00:14:20.073639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.562 [2024-07-25 00:14:20.073783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.562 [2024-07-25 00:14:20.073809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.562 [2024-07-25 00:14:20.073825] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.562 [2024-07-25 00:14:20.073837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.562 [2024-07-25 00:14:20.073865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.562 qpair failed and we were unable to recover it. 
00:34:14.562 [2024-07-25 00:14:20.083670] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.562 [2024-07-25 00:14:20.083816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.562 [2024-07-25 00:14:20.083842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.562 [2024-07-25 00:14:20.083856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.562 [2024-07-25 00:14:20.083870] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.562 [2024-07-25 00:14:20.083898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.562 qpair failed and we were unable to recover it. 
00:34:14.562 [2024-07-25 00:14:20.093647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.562 [2024-07-25 00:14:20.093788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.562 [2024-07-25 00:14:20.093815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.562 [2024-07-25 00:14:20.093829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.562 [2024-07-25 00:14:20.093843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.562 [2024-07-25 00:14:20.093871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.562 qpair failed and we were unable to recover it. 
00:34:14.562 [2024-07-25 00:14:20.103683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.562 [2024-07-25 00:14:20.103813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.562 [2024-07-25 00:14:20.103839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.562 [2024-07-25 00:14:20.103854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.562 [2024-07-25 00:14:20.103867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.562 [2024-07-25 00:14:20.103896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.562 qpair failed and we were unable to recover it. 
00:34:14.562 [2024-07-25 00:14:20.113768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.562 [2024-07-25 00:14:20.113901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.563 [2024-07-25 00:14:20.113928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.563 [2024-07-25 00:14:20.113944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.563 [2024-07-25 00:14:20.113958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.563 [2024-07-25 00:14:20.113986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.563 qpair failed and we were unable to recover it. 
00:34:14.563 [2024-07-25 00:14:20.123735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.563 [2024-07-25 00:14:20.123874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.563 [2024-07-25 00:14:20.123900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.563 [2024-07-25 00:14:20.123915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.563 [2024-07-25 00:14:20.123929] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.563 [2024-07-25 00:14:20.123957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.563 qpair failed and we were unable to recover it. 
00:34:14.563 [2024-07-25 00:14:20.133776] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.563 [2024-07-25 00:14:20.133903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.563 [2024-07-25 00:14:20.133929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.563 [2024-07-25 00:14:20.133944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.563 [2024-07-25 00:14:20.133965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.563 [2024-07-25 00:14:20.133995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.563 qpair failed and we were unable to recover it. 
00:34:14.563 [2024-07-25 00:14:20.143828] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.563 [2024-07-25 00:14:20.143969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.563 [2024-07-25 00:14:20.143997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.563 [2024-07-25 00:14:20.144012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.563 [2024-07-25 00:14:20.144027] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.563 [2024-07-25 00:14:20.144061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.563 qpair failed and we were unable to recover it. 
00:34:14.563 [2024-07-25 00:14:20.153844] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.563 [2024-07-25 00:14:20.154030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.563 [2024-07-25 00:14:20.154072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.563 [2024-07-25 00:14:20.154088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.563 [2024-07-25 00:14:20.154108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.563 [2024-07-25 00:14:20.154154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.563 qpair failed and we were unable to recover it. 
00:34:14.563 [2024-07-25 00:14:20.163866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.563 [2024-07-25 00:14:20.164009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.563 [2024-07-25 00:14:20.164034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.563 [2024-07-25 00:14:20.164049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.563 [2024-07-25 00:14:20.164062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.563 [2024-07-25 00:14:20.164091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.563 qpair failed and we were unable to recover it. 
00:34:14.563 [2024-07-25 00:14:20.173885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.563 [2024-07-25 00:14:20.174026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.563 [2024-07-25 00:14:20.174052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.563 [2024-07-25 00:14:20.174067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.563 [2024-07-25 00:14:20.174080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.563 [2024-07-25 00:14:20.174116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.563 qpair failed and we were unable to recover it. 
00:34:14.563 [2024-07-25 00:14:20.183933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.563 [2024-07-25 00:14:20.184090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.563 [2024-07-25 00:14:20.184123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.563 [2024-07-25 00:14:20.184147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.563 [2024-07-25 00:14:20.184160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.563 [2024-07-25 00:14:20.184189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.563 qpair failed and we were unable to recover it. 
00:34:14.563 [2024-07-25 00:14:20.193950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.563 [2024-07-25 00:14:20.194085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.563 [2024-07-25 00:14:20.194119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.563 [2024-07-25 00:14:20.194147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.563 [2024-07-25 00:14:20.194161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.563 [2024-07-25 00:14:20.194189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.563 qpair failed and we were unable to recover it. 
00:34:14.563 [2024-07-25 00:14:20.203990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.563 [2024-07-25 00:14:20.204140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.563 [2024-07-25 00:14:20.204165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.563 [2024-07-25 00:14:20.204180] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.563 [2024-07-25 00:14:20.204194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.563 [2024-07-25 00:14:20.204222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.563 qpair failed and we were unable to recover it. 
00:34:14.563 [2024-07-25 00:14:20.214002] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.563 [2024-07-25 00:14:20.214158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.563 [2024-07-25 00:14:20.214187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.563 [2024-07-25 00:14:20.214205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.563 [2024-07-25 00:14:20.214218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:14.563 [2024-07-25 00:14:20.214248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.563 qpair failed and we were unable to recover it. 
00:34:14.563 [2024-07-25 00:14:20.224172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.563 [2024-07-25 00:14:20.224309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.563 [2024-07-25 00:14:20.224335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.563 [2024-07-25 00:14:20.224349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.563 [2024-07-25 00:14:20.224374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.563 [2024-07-25 00:14:20.224404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.563 qpair failed and we were unable to recover it.
00:34:14.563 [2024-07-25 00:14:20.234071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.563 [2024-07-25 00:14:20.234249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.563 [2024-07-25 00:14:20.234275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.563 [2024-07-25 00:14:20.234289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.563 [2024-07-25 00:14:20.234303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.563 [2024-07-25 00:14:20.234332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.564 qpair failed and we were unable to recover it.
00:34:14.564 [2024-07-25 00:14:20.244129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.564 [2024-07-25 00:14:20.244311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.564 [2024-07-25 00:14:20.244337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.564 [2024-07-25 00:14:20.244352] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.564 [2024-07-25 00:14:20.244365] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.564 [2024-07-25 00:14:20.244394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.564 qpair failed and we were unable to recover it.
00:34:14.564 [2024-07-25 00:14:20.254129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.564 [2024-07-25 00:14:20.254276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.564 [2024-07-25 00:14:20.254303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.564 [2024-07-25 00:14:20.254318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.564 [2024-07-25 00:14:20.254332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.564 [2024-07-25 00:14:20.254360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.564 qpair failed and we were unable to recover it.
00:34:14.564 [2024-07-25 00:14:20.264174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.564 [2024-07-25 00:14:20.264343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.564 [2024-07-25 00:14:20.264369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.564 [2024-07-25 00:14:20.264384] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.564 [2024-07-25 00:14:20.264397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.564 [2024-07-25 00:14:20.264426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.564 qpair failed and we were unable to recover it.
00:34:14.564 [2024-07-25 00:14:20.274179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.564 [2024-07-25 00:14:20.274320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.564 [2024-07-25 00:14:20.274350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.564 [2024-07-25 00:14:20.274365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.564 [2024-07-25 00:14:20.274379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.564 [2024-07-25 00:14:20.274408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.564 qpair failed and we were unable to recover it.
00:34:14.823 [2024-07-25 00:14:20.284254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.823 [2024-07-25 00:14:20.284397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.823 [2024-07-25 00:14:20.284427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.823 [2024-07-25 00:14:20.284443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.823 [2024-07-25 00:14:20.284457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.823 [2024-07-25 00:14:20.284486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.823 qpair failed and we were unable to recover it.
00:34:14.823 [2024-07-25 00:14:20.294235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.823 [2024-07-25 00:14:20.294382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.823 [2024-07-25 00:14:20.294410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.823 [2024-07-25 00:14:20.294425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.823 [2024-07-25 00:14:20.294439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.823 [2024-07-25 00:14:20.294469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.823 qpair failed and we were unable to recover it.
00:34:14.823 [2024-07-25 00:14:20.304271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.823 [2024-07-25 00:14:20.304399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.823 [2024-07-25 00:14:20.304425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.823 [2024-07-25 00:14:20.304441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.823 [2024-07-25 00:14:20.304455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.823 [2024-07-25 00:14:20.304484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.823 qpair failed and we were unable to recover it.
00:34:14.823 [2024-07-25 00:14:20.314327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.823 [2024-07-25 00:14:20.314473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.823 [2024-07-25 00:14:20.314500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.823 [2024-07-25 00:14:20.314521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.823 [2024-07-25 00:14:20.314535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.823 [2024-07-25 00:14:20.314579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.823 qpair failed and we were unable to recover it.
00:34:14.823 [2024-07-25 00:14:20.324332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.823 [2024-07-25 00:14:20.324471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.823 [2024-07-25 00:14:20.324498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.823 [2024-07-25 00:14:20.324513] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.823 [2024-07-25 00:14:20.324526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.823 [2024-07-25 00:14:20.324555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.823 qpair failed and we were unable to recover it.
00:34:14.823 [2024-07-25 00:14:20.334388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.823 [2024-07-25 00:14:20.334523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.823 [2024-07-25 00:14:20.334548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.823 [2024-07-25 00:14:20.334564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.823 [2024-07-25 00:14:20.334578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.823 [2024-07-25 00:14:20.334621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.823 qpair failed and we were unable to recover it.
00:34:14.823 [2024-07-25 00:14:20.344431] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.823 [2024-07-25 00:14:20.344585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.823 [2024-07-25 00:14:20.344616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.823 [2024-07-25 00:14:20.344633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.823 [2024-07-25 00:14:20.344662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.823 [2024-07-25 00:14:20.344691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.823 qpair failed and we were unable to recover it.
00:34:14.823 [2024-07-25 00:14:20.354407] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.823 [2024-07-25 00:14:20.354541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.823 [2024-07-25 00:14:20.354567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.823 [2024-07-25 00:14:20.354584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.823 [2024-07-25 00:14:20.354597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.823 [2024-07-25 00:14:20.354641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.823 qpair failed and we were unable to recover it.
00:34:14.823 [2024-07-25 00:14:20.364543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.823 [2024-07-25 00:14:20.364676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.823 [2024-07-25 00:14:20.364703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.823 [2024-07-25 00:14:20.364718] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.823 [2024-07-25 00:14:20.364732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.823 [2024-07-25 00:14:20.364760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.823 qpair failed and we were unable to recover it.
00:34:14.823 [2024-07-25 00:14:20.374477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.823 [2024-07-25 00:14:20.374625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.823 [2024-07-25 00:14:20.374652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.823 [2024-07-25 00:14:20.374668] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.823 [2024-07-25 00:14:20.374681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.823 [2024-07-25 00:14:20.374724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.823 qpair failed and we were unable to recover it.
00:34:14.823 [2024-07-25 00:14:20.384555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.823 [2024-07-25 00:14:20.384733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.824 [2024-07-25 00:14:20.384759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.824 [2024-07-25 00:14:20.384789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.824 [2024-07-25 00:14:20.384802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.824 [2024-07-25 00:14:20.384830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.824 qpair failed and we were unable to recover it.
00:34:14.824 [2024-07-25 00:14:20.394568] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.824 [2024-07-25 00:14:20.394702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.824 [2024-07-25 00:14:20.394728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.824 [2024-07-25 00:14:20.394743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.824 [2024-07-25 00:14:20.394756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.824 [2024-07-25 00:14:20.394800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.824 qpair failed and we were unable to recover it.
00:34:14.824 [2024-07-25 00:14:20.404562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.824 [2024-07-25 00:14:20.404699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.824 [2024-07-25 00:14:20.404725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.824 [2024-07-25 00:14:20.404746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.824 [2024-07-25 00:14:20.404760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.824 [2024-07-25 00:14:20.404789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.824 qpair failed and we were unable to recover it.
00:34:14.824 [2024-07-25 00:14:20.414589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.824 [2024-07-25 00:14:20.414721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.824 [2024-07-25 00:14:20.414748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.824 [2024-07-25 00:14:20.414764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.824 [2024-07-25 00:14:20.414777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.824 [2024-07-25 00:14:20.414805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.824 qpair failed and we were unable to recover it.
00:34:14.824 [2024-07-25 00:14:20.424608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.824 [2024-07-25 00:14:20.424741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.824 [2024-07-25 00:14:20.424767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.824 [2024-07-25 00:14:20.424783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.824 [2024-07-25 00:14:20.424796] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.824 [2024-07-25 00:14:20.424825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.824 qpair failed and we were unable to recover it.
00:34:14.824 [2024-07-25 00:14:20.434672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.824 [2024-07-25 00:14:20.434848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.824 [2024-07-25 00:14:20.434875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.824 [2024-07-25 00:14:20.434906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.824 [2024-07-25 00:14:20.434919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.824 [2024-07-25 00:14:20.434948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.824 qpair failed and we were unable to recover it.
00:34:14.824 [2024-07-25 00:14:20.444678] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.824 [2024-07-25 00:14:20.444811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.824 [2024-07-25 00:14:20.444838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.824 [2024-07-25 00:14:20.444853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.824 [2024-07-25 00:14:20.444866] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.824 [2024-07-25 00:14:20.444895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.824 qpair failed and we were unable to recover it.
00:34:14.824 [2024-07-25 00:14:20.454695] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.824 [2024-07-25 00:14:20.454830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.824 [2024-07-25 00:14:20.454855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.824 [2024-07-25 00:14:20.454870] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.824 [2024-07-25 00:14:20.454883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.824 [2024-07-25 00:14:20.454910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.824 qpair failed and we were unable to recover it.
00:34:14.824 [2024-07-25 00:14:20.464761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.824 [2024-07-25 00:14:20.464902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.824 [2024-07-25 00:14:20.464928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.824 [2024-07-25 00:14:20.464943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.824 [2024-07-25 00:14:20.464957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.824 [2024-07-25 00:14:20.464985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.824 qpair failed and we were unable to recover it.
00:34:14.824 [2024-07-25 00:14:20.474757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.824 [2024-07-25 00:14:20.474934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.824 [2024-07-25 00:14:20.474986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.824 [2024-07-25 00:14:20.475002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.824 [2024-07-25 00:14:20.475016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.824 [2024-07-25 00:14:20.475059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.824 qpair failed and we were unable to recover it.
00:34:14.824 [2024-07-25 00:14:20.484821] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.824 [2024-07-25 00:14:20.484960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.824 [2024-07-25 00:14:20.484986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.824 [2024-07-25 00:14:20.485001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.824 [2024-07-25 00:14:20.485014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.824 [2024-07-25 00:14:20.485042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.824 qpair failed and we were unable to recover it.
00:34:14.824 [2024-07-25 00:14:20.494834] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.824 [2024-07-25 00:14:20.494973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.824 [2024-07-25 00:14:20.495002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.824 [2024-07-25 00:14:20.495024] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.824 [2024-07-25 00:14:20.495037] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.825 [2024-07-25 00:14:20.495080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.825 qpair failed and we were unable to recover it.
00:34:14.825 [2024-07-25 00:14:20.504825] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.825 [2024-07-25 00:14:20.504969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.825 [2024-07-25 00:14:20.504995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.825 [2024-07-25 00:14:20.505011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.825 [2024-07-25 00:14:20.505024] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.825 [2024-07-25 00:14:20.505067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.825 qpair failed and we were unable to recover it.
00:34:14.825 [2024-07-25 00:14:20.514906] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.825 [2024-07-25 00:14:20.515071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.825 [2024-07-25 00:14:20.515097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.825 [2024-07-25 00:14:20.515121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.825 [2024-07-25 00:14:20.515135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.825 [2024-07-25 00:14:20.515164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.825 qpair failed and we were unable to recover it.
00:34:14.825 [2024-07-25 00:14:20.524909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.825 [2024-07-25 00:14:20.525074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.825 [2024-07-25 00:14:20.525100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.825 [2024-07-25 00:14:20.525123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.825 [2024-07-25 00:14:20.525136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.825 [2024-07-25 00:14:20.525165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.825 qpair failed and we were unable to recover it.
00:34:14.825 [2024-07-25 00:14:20.534916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.825 [2024-07-25 00:14:20.535053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.825 [2024-07-25 00:14:20.535081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.825 [2024-07-25 00:14:20.535097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.825 [2024-07-25 00:14:20.535120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:14.825 [2024-07-25 00:14:20.535152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:14.825 qpair failed and we were unable to recover it.
00:34:15.083 [2024-07-25 00:14:20.545028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.083 [2024-07-25 00:14:20.545165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.083 [2024-07-25 00:14:20.545194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.083 [2024-07-25 00:14:20.545209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.083 [2024-07-25 00:14:20.545223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.083 [2024-07-25 00:14:20.545252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.083 qpair failed and we were unable to recover it.
00:34:15.083 [2024-07-25 00:14:20.554963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.083 [2024-07-25 00:14:20.555089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.083 [2024-07-25 00:14:20.555125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.083 [2024-07-25 00:14:20.555141] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.083 [2024-07-25 00:14:20.555154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.083 [2024-07-25 00:14:20.555184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.083 qpair failed and we were unable to recover it.
00:34:15.083 [2024-07-25 00:14:20.565144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.083 [2024-07-25 00:14:20.565282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.083 [2024-07-25 00:14:20.565308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.083 [2024-07-25 00:14:20.565323] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.083 [2024-07-25 00:14:20.565336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.083 [2024-07-25 00:14:20.565365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.083 qpair failed and we were unable to recover it.
00:34:15.083 [2024-07-25 00:14:20.575032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.083 [2024-07-25 00:14:20.575162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.083 [2024-07-25 00:14:20.575188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.083 [2024-07-25 00:14:20.575203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.083 [2024-07-25 00:14:20.575216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.083 [2024-07-25 00:14:20.575245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.083 qpair failed and we were unable to recover it.
00:34:15.083 [2024-07-25 00:14:20.585120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.083 [2024-07-25 00:14:20.585253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.083 [2024-07-25 00:14:20.585284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.083 [2024-07-25 00:14:20.585300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.083 [2024-07-25 00:14:20.585313] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.083 [2024-07-25 00:14:20.585344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.083 qpair failed and we were unable to recover it. 
00:34:15.083 [2024-07-25 00:14:20.595125] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.083 [2024-07-25 00:14:20.595281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.083 [2024-07-25 00:14:20.595307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.083 [2024-07-25 00:14:20.595322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.083 [2024-07-25 00:14:20.595335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.083 [2024-07-25 00:14:20.595365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.083 qpair failed and we were unable to recover it. 
00:34:15.083 [2024-07-25 00:14:20.605162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.083 [2024-07-25 00:14:20.605303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.084 [2024-07-25 00:14:20.605330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.084 [2024-07-25 00:14:20.605345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.084 [2024-07-25 00:14:20.605358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.084 [2024-07-25 00:14:20.605388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.084 qpair failed and we were unable to recover it. 
00:34:15.084 [2024-07-25 00:14:20.615222] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.084 [2024-07-25 00:14:20.615354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.084 [2024-07-25 00:14:20.615380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.084 [2024-07-25 00:14:20.615395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.084 [2024-07-25 00:14:20.615408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.084 [2024-07-25 00:14:20.615437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.084 qpair failed and we were unable to recover it. 
00:34:15.084 [2024-07-25 00:14:20.625177] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.084 [2024-07-25 00:14:20.625340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.084 [2024-07-25 00:14:20.625366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.084 [2024-07-25 00:14:20.625381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.084 [2024-07-25 00:14:20.625394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.084 [2024-07-25 00:14:20.625424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.084 qpair failed and we were unable to recover it. 
00:34:15.084 [2024-07-25 00:14:20.635199] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.084 [2024-07-25 00:14:20.635324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.084 [2024-07-25 00:14:20.635349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.084 [2024-07-25 00:14:20.635364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.084 [2024-07-25 00:14:20.635377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.084 [2024-07-25 00:14:20.635407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.084 qpair failed and we were unable to recover it. 
00:34:15.084 [2024-07-25 00:14:20.645250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.084 [2024-07-25 00:14:20.645380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.084 [2024-07-25 00:14:20.645405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.084 [2024-07-25 00:14:20.645420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.084 [2024-07-25 00:14:20.645433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.084 [2024-07-25 00:14:20.645461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.084 qpair failed and we were unable to recover it. 
00:34:15.084 [2024-07-25 00:14:20.655301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.084 [2024-07-25 00:14:20.655438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.084 [2024-07-25 00:14:20.655464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.084 [2024-07-25 00:14:20.655494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.084 [2024-07-25 00:14:20.655509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.084 [2024-07-25 00:14:20.655537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.084 qpair failed and we were unable to recover it. 
00:34:15.084 [2024-07-25 00:14:20.665352] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.084 [2024-07-25 00:14:20.665508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.084 [2024-07-25 00:14:20.665533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.084 [2024-07-25 00:14:20.665548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.084 [2024-07-25 00:14:20.665561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.084 [2024-07-25 00:14:20.665590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.084 qpair failed and we were unable to recover it. 
00:34:15.084 [2024-07-25 00:14:20.675431] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.084 [2024-07-25 00:14:20.675581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.084 [2024-07-25 00:14:20.675612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.084 [2024-07-25 00:14:20.675628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.084 [2024-07-25 00:14:20.675642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.084 [2024-07-25 00:14:20.675687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.084 qpair failed and we were unable to recover it. 
00:34:15.084 [2024-07-25 00:14:20.685362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.084 [2024-07-25 00:14:20.685513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.084 [2024-07-25 00:14:20.685539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.084 [2024-07-25 00:14:20.685557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.084 [2024-07-25 00:14:20.685573] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.084 [2024-07-25 00:14:20.685602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.084 qpair failed and we were unable to recover it. 
00:34:15.084 [2024-07-25 00:14:20.695378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.084 [2024-07-25 00:14:20.695516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.084 [2024-07-25 00:14:20.695542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.084 [2024-07-25 00:14:20.695557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.084 [2024-07-25 00:14:20.695570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.084 [2024-07-25 00:14:20.695600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.084 qpair failed and we were unable to recover it. 
00:34:15.084 [2024-07-25 00:14:20.705405] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.084 [2024-07-25 00:14:20.705535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.084 [2024-07-25 00:14:20.705561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.084 [2024-07-25 00:14:20.705576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.084 [2024-07-25 00:14:20.705588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.084 [2024-07-25 00:14:20.705618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.084 qpair failed and we were unable to recover it. 
00:34:15.084 [2024-07-25 00:14:20.715419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.084 [2024-07-25 00:14:20.715551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.084 [2024-07-25 00:14:20.715577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.084 [2024-07-25 00:14:20.715591] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.084 [2024-07-25 00:14:20.715604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.084 [2024-07-25 00:14:20.715639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.084 qpair failed and we were unable to recover it. 
00:34:15.084 [2024-07-25 00:14:20.725467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.084 [2024-07-25 00:14:20.725605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.084 [2024-07-25 00:14:20.725631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.084 [2024-07-25 00:14:20.725645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.084 [2024-07-25 00:14:20.725658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.084 [2024-07-25 00:14:20.725687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.084 qpair failed and we were unable to recover it. 
00:34:15.085 [2024-07-25 00:14:20.735581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.085 [2024-07-25 00:14:20.735709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.085 [2024-07-25 00:14:20.735734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.085 [2024-07-25 00:14:20.735749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.085 [2024-07-25 00:14:20.735762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.085 [2024-07-25 00:14:20.735791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.085 qpair failed and we were unable to recover it. 
00:34:15.085 [2024-07-25 00:14:20.745578] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.085 [2024-07-25 00:14:20.745712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.085 [2024-07-25 00:14:20.745739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.085 [2024-07-25 00:14:20.745757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.085 [2024-07-25 00:14:20.745772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.085 [2024-07-25 00:14:20.745816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.085 qpair failed and we were unable to recover it. 
00:34:15.085 [2024-07-25 00:14:20.755553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.085 [2024-07-25 00:14:20.755681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.085 [2024-07-25 00:14:20.755708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.085 [2024-07-25 00:14:20.755722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.085 [2024-07-25 00:14:20.755735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.085 [2024-07-25 00:14:20.755765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.085 qpair failed and we were unable to recover it. 
00:34:15.085 [2024-07-25 00:14:20.765583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.085 [2024-07-25 00:14:20.765733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.085 [2024-07-25 00:14:20.765763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.085 [2024-07-25 00:14:20.765779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.085 [2024-07-25 00:14:20.765794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.085 [2024-07-25 00:14:20.765822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.085 qpair failed and we were unable to recover it. 
00:34:15.085 [2024-07-25 00:14:20.775595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.085 [2024-07-25 00:14:20.775730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.085 [2024-07-25 00:14:20.775755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.085 [2024-07-25 00:14:20.775770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.085 [2024-07-25 00:14:20.775783] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.085 [2024-07-25 00:14:20.775813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.085 qpair failed and we were unable to recover it. 
00:34:15.085 [2024-07-25 00:14:20.785655] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.085 [2024-07-25 00:14:20.785790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.085 [2024-07-25 00:14:20.785815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.085 [2024-07-25 00:14:20.785830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.085 [2024-07-25 00:14:20.785843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.085 [2024-07-25 00:14:20.785888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.085 qpair failed and we were unable to recover it. 
00:34:15.085 [2024-07-25 00:14:20.795662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.085 [2024-07-25 00:14:20.795817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.085 [2024-07-25 00:14:20.795845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.085 [2024-07-25 00:14:20.795861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.085 [2024-07-25 00:14:20.795876] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.085 [2024-07-25 00:14:20.795906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.085 qpair failed and we were unable to recover it. 
00:34:15.344 [2024-07-25 00:14:20.805701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.344 [2024-07-25 00:14:20.805835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.344 [2024-07-25 00:14:20.805863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.344 [2024-07-25 00:14:20.805879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.344 [2024-07-25 00:14:20.805893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.344 [2024-07-25 00:14:20.805929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.344 qpair failed and we were unable to recover it. 
00:34:15.344 [2024-07-25 00:14:20.815740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.344 [2024-07-25 00:14:20.815877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.344 [2024-07-25 00:14:20.815905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.344 [2024-07-25 00:14:20.815924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.344 [2024-07-25 00:14:20.815938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.344 [2024-07-25 00:14:20.815984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.344 qpair failed and we were unable to recover it. 
00:34:15.344 [2024-07-25 00:14:20.825735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.344 [2024-07-25 00:14:20.825869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.344 [2024-07-25 00:14:20.825896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.344 [2024-07-25 00:14:20.825911] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.344 [2024-07-25 00:14:20.825925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.344 [2024-07-25 00:14:20.825954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.344 qpair failed and we were unable to recover it. 
00:34:15.344 [2024-07-25 00:14:20.835746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.344 [2024-07-25 00:14:20.835881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.344 [2024-07-25 00:14:20.835908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.344 [2024-07-25 00:14:20.835923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.344 [2024-07-25 00:14:20.835936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.344 [2024-07-25 00:14:20.835966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.344 qpair failed and we were unable to recover it. 
00:34:15.344 [2024-07-25 00:14:20.845785] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.344 [2024-07-25 00:14:20.845928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.344 [2024-07-25 00:14:20.845954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.344 [2024-07-25 00:14:20.845969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.344 [2024-07-25 00:14:20.845982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.344 [2024-07-25 00:14:20.846012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.344 qpair failed and we were unable to recover it. 
00:34:15.344 [2024-07-25 00:14:20.855939] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.344 [2024-07-25 00:14:20.856089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.344 [2024-07-25 00:14:20.856130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.344 [2024-07-25 00:14:20.856147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.344 [2024-07-25 00:14:20.856161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.344 [2024-07-25 00:14:20.856190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.344 qpair failed and we were unable to recover it. 
00:34:15.344 [2024-07-25 00:14:20.865843] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.344 [2024-07-25 00:14:20.865977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.344 [2024-07-25 00:14:20.866002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.344 [2024-07-25 00:14:20.866017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.344 [2024-07-25 00:14:20.866031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.344 [2024-07-25 00:14:20.866059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.344 qpair failed and we were unable to recover it. 
00:34:15.344 [2024-07-25 00:14:20.875876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.344 [2024-07-25 00:14:20.876017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.344 [2024-07-25 00:14:20.876044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.344 [2024-07-25 00:14:20.876058] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.344 [2024-07-25 00:14:20.876072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.345 [2024-07-25 00:14:20.876100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.345 qpair failed and we were unable to recover it. 
00:34:15.345 [2024-07-25 00:14:20.885926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.345 [2024-07-25 00:14:20.886068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.345 [2024-07-25 00:14:20.886098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.345 [2024-07-25 00:14:20.886120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.345 [2024-07-25 00:14:20.886134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.345 [2024-07-25 00:14:20.886163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.345 qpair failed and we were unable to recover it. 
00:34:15.345 [2024-07-25 00:14:20.895928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.345 [2024-07-25 00:14:20.896078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.345 [2024-07-25 00:14:20.896110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.345 [2024-07-25 00:14:20.896128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.345 [2024-07-25 00:14:20.896142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.345 [2024-07-25 00:14:20.896183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.345 qpair failed and we were unable to recover it. 
00:34:15.345 [2024-07-25 00:14:20.905958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.345 [2024-07-25 00:14:20.906093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.345 [2024-07-25 00:14:20.906128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.345 [2024-07-25 00:14:20.906144] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.345 [2024-07-25 00:14:20.906158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.345 [2024-07-25 00:14:20.906187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.345 qpair failed and we were unable to recover it. 
00:34:15.345 [2024-07-25 00:14:20.916010] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.345 [2024-07-25 00:14:20.916164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.345 [2024-07-25 00:14:20.916191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.345 [2024-07-25 00:14:20.916206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.345 [2024-07-25 00:14:20.916220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.345 [2024-07-25 00:14:20.916249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.345 qpair failed and we were unable to recover it. 
00:34:15.345 [2024-07-25 00:14:20.926126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.345 [2024-07-25 00:14:20.926271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.345 [2024-07-25 00:14:20.926296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.345 [2024-07-25 00:14:20.926312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.345 [2024-07-25 00:14:20.926326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.345 [2024-07-25 00:14:20.926354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.345 qpair failed and we were unable to recover it. 
00:34:15.345 [2024-07-25 00:14:20.936074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.345 [2024-07-25 00:14:20.936245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.345 [2024-07-25 00:14:20.936271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.345 [2024-07-25 00:14:20.936286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.345 [2024-07-25 00:14:20.936300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.345 [2024-07-25 00:14:20.936328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.345 qpair failed and we were unable to recover it. 
00:34:15.345 [2024-07-25 00:14:20.946087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.345 [2024-07-25 00:14:20.946230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.345 [2024-07-25 00:14:20.946261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.345 [2024-07-25 00:14:20.946277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.345 [2024-07-25 00:14:20.946291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.345 [2024-07-25 00:14:20.946319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.345 qpair failed and we were unable to recover it. 
00:34:15.345 [2024-07-25 00:14:20.956089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.345 [2024-07-25 00:14:20.956238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.345 [2024-07-25 00:14:20.956263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.345 [2024-07-25 00:14:20.956279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.345 [2024-07-25 00:14:20.956292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.345 [2024-07-25 00:14:20.956320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.345 qpair failed and we were unable to recover it. 
00:34:15.345 [2024-07-25 00:14:20.966161] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.345 [2024-07-25 00:14:20.966320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.345 [2024-07-25 00:14:20.966346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.345 [2024-07-25 00:14:20.966361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.345 [2024-07-25 00:14:20.966375] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.345 [2024-07-25 00:14:20.966403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.345 qpair failed and we were unable to recover it. 
00:34:15.345 [2024-07-25 00:14:20.976163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.345 [2024-07-25 00:14:20.976305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.345 [2024-07-25 00:14:20.976331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.345 [2024-07-25 00:14:20.976346] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.345 [2024-07-25 00:14:20.976359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.345 [2024-07-25 00:14:20.976398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.345 qpair failed and we were unable to recover it. 
00:34:15.345 [2024-07-25 00:14:20.986187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.345 [2024-07-25 00:14:20.986321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.345 [2024-07-25 00:14:20.986346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.345 [2024-07-25 00:14:20.986361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.345 [2024-07-25 00:14:20.986380] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.345 [2024-07-25 00:14:20.986417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.345 qpair failed and we were unable to recover it. 
00:34:15.345 [2024-07-25 00:14:20.996258] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.345 [2024-07-25 00:14:20.996440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.345 [2024-07-25 00:14:20.996481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.345 [2024-07-25 00:14:20.996496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.345 [2024-07-25 00:14:20.996510] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.345 [2024-07-25 00:14:20.996552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.345 qpair failed and we were unable to recover it. 
00:34:15.345 [2024-07-25 00:14:21.006262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.345 [2024-07-25 00:14:21.006432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.345 [2024-07-25 00:14:21.006458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.345 [2024-07-25 00:14:21.006472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.346 [2024-07-25 00:14:21.006486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.346 [2024-07-25 00:14:21.006514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.346 qpair failed and we were unable to recover it. 
00:34:15.346 [2024-07-25 00:14:21.016278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.346 [2024-07-25 00:14:21.016422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.346 [2024-07-25 00:14:21.016448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.346 [2024-07-25 00:14:21.016463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.346 [2024-07-25 00:14:21.016477] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.346 [2024-07-25 00:14:21.016505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.346 qpair failed and we were unable to recover it. 
00:34:15.346 [2024-07-25 00:14:21.026308] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.346 [2024-07-25 00:14:21.026484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.346 [2024-07-25 00:14:21.026510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.346 [2024-07-25 00:14:21.026524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.346 [2024-07-25 00:14:21.026553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.346 [2024-07-25 00:14:21.026581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.346 qpair failed and we were unable to recover it. 
00:34:15.346 [2024-07-25 00:14:21.036353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.346 [2024-07-25 00:14:21.036534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.346 [2024-07-25 00:14:21.036561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.346 [2024-07-25 00:14:21.036575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.346 [2024-07-25 00:14:21.036589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.346 [2024-07-25 00:14:21.036617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.346 qpair failed and we were unable to recover it. 
00:34:15.346 [2024-07-25 00:14:21.046395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.346 [2024-07-25 00:14:21.046534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.346 [2024-07-25 00:14:21.046560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.346 [2024-07-25 00:14:21.046575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.346 [2024-07-25 00:14:21.046588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.346 [2024-07-25 00:14:21.046616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.346 qpair failed and we were unable to recover it. 
00:34:15.346 [2024-07-25 00:14:21.056390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.346 [2024-07-25 00:14:21.056541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.346 [2024-07-25 00:14:21.056569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.346 [2024-07-25 00:14:21.056585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.346 [2024-07-25 00:14:21.056599] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.346 [2024-07-25 00:14:21.056628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.346 qpair failed and we were unable to recover it. 
00:34:15.605 [2024-07-25 00:14:21.066422] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.605 [2024-07-25 00:14:21.066561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.605 [2024-07-25 00:14:21.066590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.605 [2024-07-25 00:14:21.066605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.605 [2024-07-25 00:14:21.066619] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.605 [2024-07-25 00:14:21.066664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.605 qpair failed and we were unable to recover it. 
00:34:15.605 [2024-07-25 00:14:21.076470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.605 [2024-07-25 00:14:21.076623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.605 [2024-07-25 00:14:21.076651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.605 [2024-07-25 00:14:21.076670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.605 [2024-07-25 00:14:21.076705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.605 [2024-07-25 00:14:21.076735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.605 qpair failed and we were unable to recover it. 
00:34:15.605 [2024-07-25 00:14:21.086478] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.605 [2024-07-25 00:14:21.086626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.605 [2024-07-25 00:14:21.086652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.605 [2024-07-25 00:14:21.086667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.605 [2024-07-25 00:14:21.086681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.605 [2024-07-25 00:14:21.086709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.605 qpair failed and we were unable to recover it. 
00:34:15.605 [2024-07-25 00:14:21.096548] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.605 [2024-07-25 00:14:21.096687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.605 [2024-07-25 00:14:21.096712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.605 [2024-07-25 00:14:21.096726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.605 [2024-07-25 00:14:21.096740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.605 [2024-07-25 00:14:21.096768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.605 qpair failed and we were unable to recover it. 
00:34:15.605 [2024-07-25 00:14:21.106542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.605 [2024-07-25 00:14:21.106721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.605 [2024-07-25 00:14:21.106747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.605 [2024-07-25 00:14:21.106761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.605 [2024-07-25 00:14:21.106775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.605 [2024-07-25 00:14:21.106803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.605 qpair failed and we were unable to recover it. 
00:34:15.605 [2024-07-25 00:14:21.116538] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.605 [2024-07-25 00:14:21.116679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.605 [2024-07-25 00:14:21.116705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.605 [2024-07-25 00:14:21.116720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.605 [2024-07-25 00:14:21.116733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.605 [2024-07-25 00:14:21.116761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.605 qpair failed and we were unable to recover it. 
00:34:15.605 [2024-07-25 00:14:21.126585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.605 [2024-07-25 00:14:21.126742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.605 [2024-07-25 00:14:21.126768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.605 [2024-07-25 00:14:21.126783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.605 [2024-07-25 00:14:21.126796] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.605 [2024-07-25 00:14:21.126825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.605 qpair failed and we were unable to recover it. 
00:34:15.605 [2024-07-25 00:14:21.136671] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.605 [2024-07-25 00:14:21.136816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.605 [2024-07-25 00:14:21.136845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.605 [2024-07-25 00:14:21.136863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.605 [2024-07-25 00:14:21.136877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.605 [2024-07-25 00:14:21.136908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.605 qpair failed and we were unable to recover it. 
00:34:15.605 [2024-07-25 00:14:21.146638] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.605 [2024-07-25 00:14:21.146770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.605 [2024-07-25 00:14:21.146796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.605 [2024-07-25 00:14:21.146811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.605 [2024-07-25 00:14:21.146825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.605 [2024-07-25 00:14:21.146854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.605 qpair failed and we were unable to recover it. 
00:34:15.605 [2024-07-25 00:14:21.156703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.605 [2024-07-25 00:14:21.156842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.605 [2024-07-25 00:14:21.156868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.605 [2024-07-25 00:14:21.156883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.605 [2024-07-25 00:14:21.156898] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.605 [2024-07-25 00:14:21.156926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.605 qpair failed and we were unable to recover it. 
00:34:15.605 [2024-07-25 00:14:21.166736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.605 [2024-07-25 00:14:21.166911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.605 [2024-07-25 00:14:21.166937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.605 [2024-07-25 00:14:21.166952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.605 [2024-07-25 00:14:21.166972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.606 [2024-07-25 00:14:21.167002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.606 qpair failed and we were unable to recover it. 
00:34:15.606 [2024-07-25 00:14:21.176752] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.606 [2024-07-25 00:14:21.176897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.606 [2024-07-25 00:14:21.176923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.606 [2024-07-25 00:14:21.176938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.606 [2024-07-25 00:14:21.176954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.606 [2024-07-25 00:14:21.176999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.606 qpair failed and we were unable to recover it. 
00:34:15.606 [2024-07-25 00:14:21.186798] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.606 [2024-07-25 00:14:21.186930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.606 [2024-07-25 00:14:21.186956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.606 [2024-07-25 00:14:21.186971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.606 [2024-07-25 00:14:21.186985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.606 [2024-07-25 00:14:21.187013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.606 qpair failed and we were unable to recover it.
00:34:15.606 [2024-07-25 00:14:21.196885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.606 [2024-07-25 00:14:21.197021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.606 [2024-07-25 00:14:21.197047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.606 [2024-07-25 00:14:21.197062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.606 [2024-07-25 00:14:21.197083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.606 [2024-07-25 00:14:21.197120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.606 qpair failed and we were unable to recover it.
00:34:15.606 [2024-07-25 00:14:21.206825] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.606 [2024-07-25 00:14:21.206967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.606 [2024-07-25 00:14:21.206993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.606 [2024-07-25 00:14:21.207007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.606 [2024-07-25 00:14:21.207021] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.606 [2024-07-25 00:14:21.207049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.606 qpair failed and we were unable to recover it.
00:34:15.606 [2024-07-25 00:14:21.216848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.606 [2024-07-25 00:14:21.216989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.606 [2024-07-25 00:14:21.217015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.606 [2024-07-25 00:14:21.217030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.606 [2024-07-25 00:14:21.217043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.606 [2024-07-25 00:14:21.217072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.606 qpair failed and we were unable to recover it.
00:34:15.606 [2024-07-25 00:14:21.226928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.606 [2024-07-25 00:14:21.227113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.606 [2024-07-25 00:14:21.227139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.606 [2024-07-25 00:14:21.227153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.606 [2024-07-25 00:14:21.227167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.606 [2024-07-25 00:14:21.227195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.606 qpair failed and we were unable to recover it.
00:34:15.606 [2024-07-25 00:14:21.237000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.606 [2024-07-25 00:14:21.237142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.606 [2024-07-25 00:14:21.237168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.606 [2024-07-25 00:14:21.237183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.606 [2024-07-25 00:14:21.237197] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.606 [2024-07-25 00:14:21.237225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.606 qpair failed and we were unable to recover it.
00:34:15.606 [2024-07-25 00:14:21.246980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.606 [2024-07-25 00:14:21.247172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.606 [2024-07-25 00:14:21.247198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.606 [2024-07-25 00:14:21.247213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.606 [2024-07-25 00:14:21.247227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.606 [2024-07-25 00:14:21.247256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.606 qpair failed and we were unable to recover it.
00:34:15.606 [2024-07-25 00:14:21.257138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.606 [2024-07-25 00:14:21.257277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.606 [2024-07-25 00:14:21.257303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.606 [2024-07-25 00:14:21.257324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.606 [2024-07-25 00:14:21.257339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.606 [2024-07-25 00:14:21.257369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.606 qpair failed and we were unable to recover it.
00:34:15.606 [2024-07-25 00:14:21.267030] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.606 [2024-07-25 00:14:21.267188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.606 [2024-07-25 00:14:21.267214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.606 [2024-07-25 00:14:21.267229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.606 [2024-07-25 00:14:21.267243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.606 [2024-07-25 00:14:21.267272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.606 qpair failed and we were unable to recover it.
00:34:15.606 [2024-07-25 00:14:21.277096] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.606 [2024-07-25 00:14:21.277249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.606 [2024-07-25 00:14:21.277275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.606 [2024-07-25 00:14:21.277290] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.606 [2024-07-25 00:14:21.277304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.606 [2024-07-25 00:14:21.277333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.606 qpair failed and we were unable to recover it.
00:34:15.606 [2024-07-25 00:14:21.287068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.606 [2024-07-25 00:14:21.287220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.606 [2024-07-25 00:14:21.287246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.606 [2024-07-25 00:14:21.287261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.606 [2024-07-25 00:14:21.287276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.607 [2024-07-25 00:14:21.287304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.607 qpair failed and we were unable to recover it.
00:34:15.607 [2024-07-25 00:14:21.297100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.607 [2024-07-25 00:14:21.297242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.607 [2024-07-25 00:14:21.297268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.607 [2024-07-25 00:14:21.297283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.607 [2024-07-25 00:14:21.297297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.607 [2024-07-25 00:14:21.297325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.607 qpair failed and we were unable to recover it.
00:34:15.607 [2024-07-25 00:14:21.307120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.607 [2024-07-25 00:14:21.307259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.607 [2024-07-25 00:14:21.307286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.607 [2024-07-25 00:14:21.307301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.607 [2024-07-25 00:14:21.307315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.607 [2024-07-25 00:14:21.307344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.607 qpair failed and we were unable to recover it.
00:34:15.607 [2024-07-25 00:14:21.317141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.607 [2024-07-25 00:14:21.317278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.607 [2024-07-25 00:14:21.317307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.607 [2024-07-25 00:14:21.317322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.607 [2024-07-25 00:14:21.317337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.607 [2024-07-25 00:14:21.317366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.607 qpair failed and we were unable to recover it.
00:34:15.866 [2024-07-25 00:14:21.327208] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.866 [2024-07-25 00:14:21.327374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.866 [2024-07-25 00:14:21.327412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.866 [2024-07-25 00:14:21.327428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.866 [2024-07-25 00:14:21.327443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.866 [2024-07-25 00:14:21.327472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.866 qpair failed and we were unable to recover it.
00:34:15.866 [2024-07-25 00:14:21.337308] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.866 [2024-07-25 00:14:21.337452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.866 [2024-07-25 00:14:21.337479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.866 [2024-07-25 00:14:21.337494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.866 [2024-07-25 00:14:21.337508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.866 [2024-07-25 00:14:21.337537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.866 qpair failed and we were unable to recover it.
00:34:15.866 [2024-07-25 00:14:21.347259] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.866 [2024-07-25 00:14:21.347402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.866 [2024-07-25 00:14:21.347429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.866 [2024-07-25 00:14:21.347450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.866 [2024-07-25 00:14:21.347464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.866 [2024-07-25 00:14:21.347501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.866 qpair failed and we were unable to recover it.
00:34:15.866 [2024-07-25 00:14:21.357252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.866 [2024-07-25 00:14:21.357382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.866 [2024-07-25 00:14:21.357408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.866 [2024-07-25 00:14:21.357423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.866 [2024-07-25 00:14:21.357437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.866 [2024-07-25 00:14:21.357469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.866 qpair failed and we were unable to recover it.
00:34:15.866 [2024-07-25 00:14:21.367319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.866 [2024-07-25 00:14:21.367458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.866 [2024-07-25 00:14:21.367485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.866 [2024-07-25 00:14:21.367500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.866 [2024-07-25 00:14:21.367513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.866 [2024-07-25 00:14:21.367543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.866 qpair failed and we were unable to recover it.
00:34:15.866 [2024-07-25 00:14:21.377337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.866 [2024-07-25 00:14:21.377484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.866 [2024-07-25 00:14:21.377510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.866 [2024-07-25 00:14:21.377525] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.866 [2024-07-25 00:14:21.377538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.866 [2024-07-25 00:14:21.377581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.866 qpair failed and we were unable to recover it.
00:34:15.866 [2024-07-25 00:14:21.387362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.866 [2024-07-25 00:14:21.387506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.866 [2024-07-25 00:14:21.387532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.866 [2024-07-25 00:14:21.387548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.866 [2024-07-25 00:14:21.387562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.866 [2024-07-25 00:14:21.387604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.866 qpair failed and we were unable to recover it.
00:34:15.866 [2024-07-25 00:14:21.397413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.866 [2024-07-25 00:14:21.397545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.866 [2024-07-25 00:14:21.397571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.866 [2024-07-25 00:14:21.397585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.866 [2024-07-25 00:14:21.397599] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.866 [2024-07-25 00:14:21.397628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.866 qpair failed and we were unable to recover it.
00:34:15.867 [2024-07-25 00:14:21.407439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.867 [2024-07-25 00:14:21.407578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.867 [2024-07-25 00:14:21.407603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.867 [2024-07-25 00:14:21.407618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.867 [2024-07-25 00:14:21.407631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.867 [2024-07-25 00:14:21.407659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.867 qpair failed and we were unable to recover it.
00:34:15.867 [2024-07-25 00:14:21.417432] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.867 [2024-07-25 00:14:21.417583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.867 [2024-07-25 00:14:21.417608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.867 [2024-07-25 00:14:21.417623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.867 [2024-07-25 00:14:21.417635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.867 [2024-07-25 00:14:21.417664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.867 qpair failed and we were unable to recover it.
00:34:15.867 [2024-07-25 00:14:21.427479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.867 [2024-07-25 00:14:21.427618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.867 [2024-07-25 00:14:21.427644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.867 [2024-07-25 00:14:21.427659] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.867 [2024-07-25 00:14:21.427673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.867 [2024-07-25 00:14:21.427701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.867 qpair failed and we were unable to recover it.
00:34:15.867 [2024-07-25 00:14:21.437520] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.867 [2024-07-25 00:14:21.437703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.867 [2024-07-25 00:14:21.437728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.867 [2024-07-25 00:14:21.437749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.867 [2024-07-25 00:14:21.437764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.867 [2024-07-25 00:14:21.437792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.867 qpair failed and we were unable to recover it.
00:34:15.867 [2024-07-25 00:14:21.447541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.867 [2024-07-25 00:14:21.447685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.867 [2024-07-25 00:14:21.447711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.867 [2024-07-25 00:14:21.447726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.867 [2024-07-25 00:14:21.447740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.867 [2024-07-25 00:14:21.447768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.867 qpair failed and we were unable to recover it.
00:34:15.867 [2024-07-25 00:14:21.457597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.867 [2024-07-25 00:14:21.457776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.867 [2024-07-25 00:14:21.457801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.867 [2024-07-25 00:14:21.457815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.867 [2024-07-25 00:14:21.457827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.867 [2024-07-25 00:14:21.457855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.867 qpair failed and we were unable to recover it.
00:34:15.867 [2024-07-25 00:14:21.467637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.867 [2024-07-25 00:14:21.467777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.867 [2024-07-25 00:14:21.467803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.867 [2024-07-25 00:14:21.467817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.867 [2024-07-25 00:14:21.467830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.867 [2024-07-25 00:14:21.467859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.867 qpair failed and we were unable to recover it.
00:34:15.867 [2024-07-25 00:14:21.477629] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.867 [2024-07-25 00:14:21.477768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.867 [2024-07-25 00:14:21.477794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.867 [2024-07-25 00:14:21.477809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.867 [2024-07-25 00:14:21.477823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.867 [2024-07-25 00:14:21.477851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.867 qpair failed and we were unable to recover it.
00:34:15.867 [2024-07-25 00:14:21.487650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.867 [2024-07-25 00:14:21.487801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.867 [2024-07-25 00:14:21.487825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.867 [2024-07-25 00:14:21.487839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.867 [2024-07-25 00:14:21.487853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.867 [2024-07-25 00:14:21.487881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.867 qpair failed and we were unable to recover it.
00:34:15.867 [2024-07-25 00:14:21.497682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.867 [2024-07-25 00:14:21.497820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.867 [2024-07-25 00:14:21.497845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.867 [2024-07-25 00:14:21.497860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.867 [2024-07-25 00:14:21.497874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.867 [2024-07-25 00:14:21.497903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.867 qpair failed and we were unable to recover it.
00:34:15.867 [2024-07-25 00:14:21.507695] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.867 [2024-07-25 00:14:21.507832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.867 [2024-07-25 00:14:21.507857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.867 [2024-07-25 00:14:21.507871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.867 [2024-07-25 00:14:21.507885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.867 [2024-07-25 00:14:21.507914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.867 qpair failed and we were unable to recover it.
00:34:15.867 [2024-07-25 00:14:21.517733] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.867 [2024-07-25 00:14:21.517876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.867 [2024-07-25 00:14:21.517901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.867 [2024-07-25 00:14:21.517916] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.867 [2024-07-25 00:14:21.517933] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.867 [2024-07-25 00:14:21.517962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.867 qpair failed and we were unable to recover it.
00:34:15.867 [2024-07-25 00:14:21.527752] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.867 [2024-07-25 00:14:21.527896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.867 [2024-07-25 00:14:21.527927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.867 [2024-07-25 00:14:21.527943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.867 [2024-07-25 00:14:21.527957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.868 [2024-07-25 00:14:21.527985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.868 qpair failed and we were unable to recover it.
00:34:15.868 [2024-07-25 00:14:21.537766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.868 [2024-07-25 00:14:21.537921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.868 [2024-07-25 00:14:21.537948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.868 [2024-07-25 00:14:21.537962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.868 [2024-07-25 00:14:21.537976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:15.868 [2024-07-25 00:14:21.538005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.868 qpair failed and we were unable to recover it.
00:34:15.868 [2024-07-25 00:14:21.547820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.868 [2024-07-25 00:14:21.547955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.868 [2024-07-25 00:14:21.547981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.868 [2024-07-25 00:14:21.547996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.868 [2024-07-25 00:14:21.548009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.868 [2024-07-25 00:14:21.548038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.868 qpair failed and we were unable to recover it. 
00:34:15.868 [2024-07-25 00:14:21.557904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.868 [2024-07-25 00:14:21.558041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.868 [2024-07-25 00:14:21.558067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.868 [2024-07-25 00:14:21.558082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.868 [2024-07-25 00:14:21.558096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.868 [2024-07-25 00:14:21.558133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.868 qpair failed and we were unable to recover it. 
00:34:15.868 [2024-07-25 00:14:21.567900] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.868 [2024-07-25 00:14:21.568045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.868 [2024-07-25 00:14:21.568072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.868 [2024-07-25 00:14:21.568087] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.868 [2024-07-25 00:14:21.568110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.868 [2024-07-25 00:14:21.568141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.868 qpair failed and we were unable to recover it. 
00:34:15.868 [2024-07-25 00:14:21.577895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.868 [2024-07-25 00:14:21.578035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.868 [2024-07-25 00:14:21.578063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.868 [2024-07-25 00:14:21.578079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.868 [2024-07-25 00:14:21.578093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:15.868 [2024-07-25 00:14:21.578131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.868 qpair failed and we were unable to recover it. 
00:34:16.127 [2024-07-25 00:14:21.587917] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.127 [2024-07-25 00:14:21.588054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.127 [2024-07-25 00:14:21.588084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.127 [2024-07-25 00:14:21.588099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.127 [2024-07-25 00:14:21.588122] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.127 [2024-07-25 00:14:21.588152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.127 qpair failed and we were unable to recover it. 
00:34:16.127 [2024-07-25 00:14:21.597944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.127 [2024-07-25 00:14:21.598084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.127 [2024-07-25 00:14:21.598122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.127 [2024-07-25 00:14:21.598138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.127 [2024-07-25 00:14:21.598153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.127 [2024-07-25 00:14:21.598182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.127 qpair failed and we were unable to recover it. 
00:34:16.127 [2024-07-25 00:14:21.607987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.127 [2024-07-25 00:14:21.608134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.127 [2024-07-25 00:14:21.608160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.127 [2024-07-25 00:14:21.608175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.127 [2024-07-25 00:14:21.608188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.127 [2024-07-25 00:14:21.608217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.127 qpair failed and we were unable to recover it. 
00:34:16.127 [2024-07-25 00:14:21.618098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.127 [2024-07-25 00:14:21.618250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.127 [2024-07-25 00:14:21.618284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.127 [2024-07-25 00:14:21.618300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.127 [2024-07-25 00:14:21.618313] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.127 [2024-07-25 00:14:21.618342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.127 qpair failed and we were unable to recover it. 
00:34:16.127 [2024-07-25 00:14:21.628158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.127 [2024-07-25 00:14:21.628297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.127 [2024-07-25 00:14:21.628323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.127 [2024-07-25 00:14:21.628338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.127 [2024-07-25 00:14:21.628352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.127 [2024-07-25 00:14:21.628381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.127 qpair failed and we were unable to recover it. 
00:34:16.127 [2024-07-25 00:14:21.638047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.127 [2024-07-25 00:14:21.638188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.127 [2024-07-25 00:14:21.638214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.127 [2024-07-25 00:14:21.638229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.127 [2024-07-25 00:14:21.638242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.127 [2024-07-25 00:14:21.638271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.127 qpair failed and we were unable to recover it. 
00:34:16.127 [2024-07-25 00:14:21.648227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.127 [2024-07-25 00:14:21.648361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.127 [2024-07-25 00:14:21.648388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.127 [2024-07-25 00:14:21.648404] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.127 [2024-07-25 00:14:21.648417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.127 [2024-07-25 00:14:21.648445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.127 qpair failed and we were unable to recover it. 
00:34:16.127 [2024-07-25 00:14:21.658127] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.127 [2024-07-25 00:14:21.658285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.127 [2024-07-25 00:14:21.658312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.127 [2024-07-25 00:14:21.658328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.128 [2024-07-25 00:14:21.658341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.128 [2024-07-25 00:14:21.658376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.128 qpair failed and we were unable to recover it. 
00:34:16.128 [2024-07-25 00:14:21.668190] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.128 [2024-07-25 00:14:21.668325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.128 [2024-07-25 00:14:21.668351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.128 [2024-07-25 00:14:21.668367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.128 [2024-07-25 00:14:21.668380] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.128 [2024-07-25 00:14:21.668409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.128 qpair failed and we were unable to recover it. 
00:34:16.128 [2024-07-25 00:14:21.678173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.128 [2024-07-25 00:14:21.678328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.128 [2024-07-25 00:14:21.678355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.128 [2024-07-25 00:14:21.678371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.128 [2024-07-25 00:14:21.678385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.128 [2024-07-25 00:14:21.678414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.128 qpair failed and we were unable to recover it. 
00:34:16.128 [2024-07-25 00:14:21.688263] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.128 [2024-07-25 00:14:21.688401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.128 [2024-07-25 00:14:21.688428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.128 [2024-07-25 00:14:21.688443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.128 [2024-07-25 00:14:21.688457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.128 [2024-07-25 00:14:21.688486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.128 qpair failed and we were unable to recover it. 
00:34:16.128 [2024-07-25 00:14:21.698245] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.128 [2024-07-25 00:14:21.698387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.128 [2024-07-25 00:14:21.698414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.128 [2024-07-25 00:14:21.698429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.128 [2024-07-25 00:14:21.698442] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.128 [2024-07-25 00:14:21.698471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.128 qpair failed and we were unable to recover it. 
00:34:16.128 [2024-07-25 00:14:21.708260] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.128 [2024-07-25 00:14:21.708396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.128 [2024-07-25 00:14:21.708427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.128 [2024-07-25 00:14:21.708444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.128 [2024-07-25 00:14:21.708458] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.128 [2024-07-25 00:14:21.708487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.128 qpair failed and we were unable to recover it. 
00:34:16.128 [2024-07-25 00:14:21.718298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.128 [2024-07-25 00:14:21.718430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.128 [2024-07-25 00:14:21.718458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.128 [2024-07-25 00:14:21.718473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.128 [2024-07-25 00:14:21.718488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.128 [2024-07-25 00:14:21.718517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.128 qpair failed and we were unable to recover it. 
00:34:16.128 [2024-07-25 00:14:21.728370] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.128 [2024-07-25 00:14:21.728505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.128 [2024-07-25 00:14:21.728532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.128 [2024-07-25 00:14:21.728547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.128 [2024-07-25 00:14:21.728561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.128 [2024-07-25 00:14:21.728590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.128 qpair failed and we were unable to recover it. 
00:34:16.128 [2024-07-25 00:14:21.738356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.128 [2024-07-25 00:14:21.738495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.128 [2024-07-25 00:14:21.738522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.128 [2024-07-25 00:14:21.738538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.128 [2024-07-25 00:14:21.738551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.128 [2024-07-25 00:14:21.738580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.128 qpair failed and we were unable to recover it. 
00:34:16.128 [2024-07-25 00:14:21.748395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.128 [2024-07-25 00:14:21.748532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.128 [2024-07-25 00:14:21.748558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.128 [2024-07-25 00:14:21.748573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.128 [2024-07-25 00:14:21.748586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.128 [2024-07-25 00:14:21.748634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.128 qpair failed and we were unable to recover it. 
00:34:16.128 [2024-07-25 00:14:21.758414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.128 [2024-07-25 00:14:21.758547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.128 [2024-07-25 00:14:21.758577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.128 [2024-07-25 00:14:21.758593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.128 [2024-07-25 00:14:21.758607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.128 [2024-07-25 00:14:21.758650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.128 qpair failed and we were unable to recover it. 
00:34:16.128 [2024-07-25 00:14:21.768447] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.128 [2024-07-25 00:14:21.768584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.128 [2024-07-25 00:14:21.768611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.128 [2024-07-25 00:14:21.768626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.128 [2024-07-25 00:14:21.768639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.128 [2024-07-25 00:14:21.768667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.128 qpair failed and we were unable to recover it. 
00:34:16.128 [2024-07-25 00:14:21.778502] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.128 [2024-07-25 00:14:21.778652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.128 [2024-07-25 00:14:21.778680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.128 [2024-07-25 00:14:21.778695] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.128 [2024-07-25 00:14:21.778708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.128 [2024-07-25 00:14:21.778737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.128 qpair failed and we were unable to recover it. 
00:34:16.128 [2024-07-25 00:14:21.788586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.128 [2024-07-25 00:14:21.788718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.129 [2024-07-25 00:14:21.788744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.129 [2024-07-25 00:14:21.788760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.129 [2024-07-25 00:14:21.788773] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.129 [2024-07-25 00:14:21.788803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.129 qpair failed and we were unable to recover it. 
00:34:16.129 [2024-07-25 00:14:21.798554] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.129 [2024-07-25 00:14:21.798730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.129 [2024-07-25 00:14:21.798777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.129 [2024-07-25 00:14:21.798793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.129 [2024-07-25 00:14:21.798806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.129 [2024-07-25 00:14:21.798849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.129 qpair failed and we were unable to recover it. 
00:34:16.129 [2024-07-25 00:14:21.808609] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.129 [2024-07-25 00:14:21.808746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.129 [2024-07-25 00:14:21.808776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.129 [2024-07-25 00:14:21.808792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.129 [2024-07-25 00:14:21.808806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.129 [2024-07-25 00:14:21.808835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.129 qpair failed and we were unable to recover it. 
00:34:16.129 [2024-07-25 00:14:21.818605] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.129 [2024-07-25 00:14:21.818740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.129 [2024-07-25 00:14:21.818767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.129 [2024-07-25 00:14:21.818782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.129 [2024-07-25 00:14:21.818795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.129 [2024-07-25 00:14:21.818824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.129 qpair failed and we were unable to recover it. 
00:34:16.129 [2024-07-25 00:14:21.828604] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.129 [2024-07-25 00:14:21.828772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.129 [2024-07-25 00:14:21.828799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.129 [2024-07-25 00:14:21.828815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.129 [2024-07-25 00:14:21.828828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.129 [2024-07-25 00:14:21.828858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.129 qpair failed and we were unable to recover it.
00:34:16.129 [2024-07-25 00:14:21.838646] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.129 [2024-07-25 00:14:21.838798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.129 [2024-07-25 00:14:21.838829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.129 [2024-07-25 00:14:21.838845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.129 [2024-07-25 00:14:21.838858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.129 [2024-07-25 00:14:21.838894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.129 qpair failed and we were unable to recover it.
00:34:16.388 [2024-07-25 00:14:21.848699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.388 [2024-07-25 00:14:21.848843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.388 [2024-07-25 00:14:21.848872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.388 [2024-07-25 00:14:21.848888] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.388 [2024-07-25 00:14:21.848901] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.388 [2024-07-25 00:14:21.848931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.388 qpair failed and we were unable to recover it.
00:34:16.388 [2024-07-25 00:14:21.858706] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.388 [2024-07-25 00:14:21.858878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.388 [2024-07-25 00:14:21.858906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.388 [2024-07-25 00:14:21.858922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.388 [2024-07-25 00:14:21.858936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.388 [2024-07-25 00:14:21.858965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.388 qpair failed and we were unable to recover it.
00:34:16.388 [2024-07-25 00:14:21.868821] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.388 [2024-07-25 00:14:21.868980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.388 [2024-07-25 00:14:21.869007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.388 [2024-07-25 00:14:21.869023] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.388 [2024-07-25 00:14:21.869036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.388 [2024-07-25 00:14:21.869065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.388 qpair failed and we were unable to recover it.
00:34:16.388 [2024-07-25 00:14:21.878765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.388 [2024-07-25 00:14:21.878897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.388 [2024-07-25 00:14:21.878924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.388 [2024-07-25 00:14:21.878939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.388 [2024-07-25 00:14:21.878952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.388 [2024-07-25 00:14:21.878980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.388 qpair failed and we were unable to recover it.
00:34:16.388 [2024-07-25 00:14:21.888890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.388 [2024-07-25 00:14:21.889029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.388 [2024-07-25 00:14:21.889061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.389 [2024-07-25 00:14:21.889078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.389 [2024-07-25 00:14:21.889091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.389 [2024-07-25 00:14:21.889128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.389 qpair failed and we were unable to recover it.
00:34:16.389 [2024-07-25 00:14:21.898870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.389 [2024-07-25 00:14:21.899009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.389 [2024-07-25 00:14:21.899035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.389 [2024-07-25 00:14:21.899051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.389 [2024-07-25 00:14:21.899065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.389 [2024-07-25 00:14:21.899093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.389 qpair failed and we were unable to recover it.
00:34:16.389 [2024-07-25 00:14:21.908894] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.389 [2024-07-25 00:14:21.909045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.389 [2024-07-25 00:14:21.909072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.389 [2024-07-25 00:14:21.909088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.389 [2024-07-25 00:14:21.909108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.389 [2024-07-25 00:14:21.909139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.389 qpair failed and we were unable to recover it.
00:34:16.389 [2024-07-25 00:14:21.918906] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.389 [2024-07-25 00:14:21.919038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.389 [2024-07-25 00:14:21.919064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.389 [2024-07-25 00:14:21.919079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.389 [2024-07-25 00:14:21.919093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.389 [2024-07-25 00:14:21.919128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.389 qpair failed and we were unable to recover it.
00:34:16.389 [2024-07-25 00:14:21.928978] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.389 [2024-07-25 00:14:21.929123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.389 [2024-07-25 00:14:21.929147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.389 [2024-07-25 00:14:21.929162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.389 [2024-07-25 00:14:21.929180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.389 [2024-07-25 00:14:21.929210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.389 qpair failed and we were unable to recover it.
00:34:16.389 [2024-07-25 00:14:21.939046] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.389 [2024-07-25 00:14:21.939181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.389 [2024-07-25 00:14:21.939208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.389 [2024-07-25 00:14:21.939223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.389 [2024-07-25 00:14:21.939237] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.389 [2024-07-25 00:14:21.939266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.389 qpair failed and we were unable to recover it.
00:34:16.389 [2024-07-25 00:14:21.948987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.389 [2024-07-25 00:14:21.949124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.389 [2024-07-25 00:14:21.949149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.389 [2024-07-25 00:14:21.949164] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.389 [2024-07-25 00:14:21.949177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.389 [2024-07-25 00:14:21.949207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.389 qpair failed and we were unable to recover it.
00:34:16.389 [2024-07-25 00:14:21.959086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.389 [2024-07-25 00:14:21.959234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.389 [2024-07-25 00:14:21.959261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.389 [2024-07-25 00:14:21.959276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.389 [2024-07-25 00:14:21.959290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.389 [2024-07-25 00:14:21.959319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.389 qpair failed and we were unable to recover it.
00:34:16.389 [2024-07-25 00:14:21.969062] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.389 [2024-07-25 00:14:21.969207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.389 [2024-07-25 00:14:21.969233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.389 [2024-07-25 00:14:21.969249] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.389 [2024-07-25 00:14:21.969265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.389 [2024-07-25 00:14:21.969294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.389 qpair failed and we were unable to recover it.
00:34:16.389 [2024-07-25 00:14:21.979059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.389 [2024-07-25 00:14:21.979211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.389 [2024-07-25 00:14:21.979238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.389 [2024-07-25 00:14:21.979253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.389 [2024-07-25 00:14:21.979266] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.389 [2024-07-25 00:14:21.979294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.389 qpair failed and we were unable to recover it.
00:34:16.389 [2024-07-25 00:14:21.989170] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.389 [2024-07-25 00:14:21.989301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.389 [2024-07-25 00:14:21.989328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.389 [2024-07-25 00:14:21.989343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.389 [2024-07-25 00:14:21.989358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.389 [2024-07-25 00:14:21.989386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.389 qpair failed and we were unable to recover it.
00:34:16.389 [2024-07-25 00:14:21.999129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.389 [2024-07-25 00:14:21.999255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.389 [2024-07-25 00:14:21.999281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.389 [2024-07-25 00:14:21.999296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.389 [2024-07-25 00:14:21.999310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.389 [2024-07-25 00:14:21.999340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.389 qpair failed and we were unable to recover it.
00:34:16.389 [2024-07-25 00:14:22.009181] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.389 [2024-07-25 00:14:22.009320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.389 [2024-07-25 00:14:22.009345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.389 [2024-07-25 00:14:22.009360] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.389 [2024-07-25 00:14:22.009374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.389 [2024-07-25 00:14:22.009403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.389 qpair failed and we were unable to recover it.
00:34:16.389 [2024-07-25 00:14:22.019219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.390 [2024-07-25 00:14:22.019382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.390 [2024-07-25 00:14:22.019412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.390 [2024-07-25 00:14:22.019428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.390 [2024-07-25 00:14:22.019462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.390 [2024-07-25 00:14:22.019492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.390 qpair failed and we were unable to recover it.
00:34:16.390 [2024-07-25 00:14:22.029226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.390 [2024-07-25 00:14:22.029388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.390 [2024-07-25 00:14:22.029414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.390 [2024-07-25 00:14:22.029429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.390 [2024-07-25 00:14:22.029442] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.390 [2024-07-25 00:14:22.029469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.390 qpair failed and we were unable to recover it.
00:34:16.390 [2024-07-25 00:14:22.039253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.390 [2024-07-25 00:14:22.039381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.390 [2024-07-25 00:14:22.039407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.390 [2024-07-25 00:14:22.039422] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.390 [2024-07-25 00:14:22.039436] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.390 [2024-07-25 00:14:22.039464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.390 qpair failed and we were unable to recover it.
00:34:16.390 [2024-07-25 00:14:22.049309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.390 [2024-07-25 00:14:22.049438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.390 [2024-07-25 00:14:22.049464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.390 [2024-07-25 00:14:22.049480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.390 [2024-07-25 00:14:22.049493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.390 [2024-07-25 00:14:22.049522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.390 qpair failed and we were unable to recover it.
00:34:16.390 [2024-07-25 00:14:22.059403] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.390 [2024-07-25 00:14:22.059534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.390 [2024-07-25 00:14:22.059560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.390 [2024-07-25 00:14:22.059575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.390 [2024-07-25 00:14:22.059589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.390 [2024-07-25 00:14:22.059617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.390 qpair failed and we were unable to recover it.
00:34:16.390 [2024-07-25 00:14:22.069327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.390 [2024-07-25 00:14:22.069491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.390 [2024-07-25 00:14:22.069517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.390 [2024-07-25 00:14:22.069532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.390 [2024-07-25 00:14:22.069547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.390 [2024-07-25 00:14:22.069576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.390 qpair failed and we were unable to recover it.
00:34:16.390 [2024-07-25 00:14:22.079359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.390 [2024-07-25 00:14:22.079489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.390 [2024-07-25 00:14:22.079515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.390 [2024-07-25 00:14:22.079530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.390 [2024-07-25 00:14:22.079544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.390 [2024-07-25 00:14:22.079573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.390 qpair failed and we were unable to recover it.
00:34:16.390 [2024-07-25 00:14:22.089422] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.390 [2024-07-25 00:14:22.089597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.390 [2024-07-25 00:14:22.089623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.390 [2024-07-25 00:14:22.089638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.390 [2024-07-25 00:14:22.089651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.390 [2024-07-25 00:14:22.089681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.390 qpair failed and we were unable to recover it.
00:34:16.390 [2024-07-25 00:14:22.099521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.390 [2024-07-25 00:14:22.099673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.390 [2024-07-25 00:14:22.099710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.390 [2024-07-25 00:14:22.099739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.390 [2024-07-25 00:14:22.099763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.390 [2024-07-25 00:14:22.099804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.390 qpair failed and we were unable to recover it.
00:34:16.649 [2024-07-25 00:14:22.109465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.649 [2024-07-25 00:14:22.109602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.649 [2024-07-25 00:14:22.109631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.649 [2024-07-25 00:14:22.109647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.649 [2024-07-25 00:14:22.109667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.649 [2024-07-25 00:14:22.109697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.649 qpair failed and we were unable to recover it.
00:34:16.649 [2024-07-25 00:14:22.119502] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.649 [2024-07-25 00:14:22.119634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.649 [2024-07-25 00:14:22.119661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.649 [2024-07-25 00:14:22.119677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.649 [2024-07-25 00:14:22.119691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.649 [2024-07-25 00:14:22.119720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.649 qpair failed and we were unable to recover it.
00:34:16.649 [2024-07-25 00:14:22.129588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.649 [2024-07-25 00:14:22.129752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.649 [2024-07-25 00:14:22.129779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.649 [2024-07-25 00:14:22.129794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.649 [2024-07-25 00:14:22.129808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.649 [2024-07-25 00:14:22.129837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.649 qpair failed and we were unable to recover it.
00:34:16.649 [2024-07-25 00:14:22.139551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.649 [2024-07-25 00:14:22.139731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.649 [2024-07-25 00:14:22.139758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.649 [2024-07-25 00:14:22.139773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.649 [2024-07-25 00:14:22.139786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.649 [2024-07-25 00:14:22.139815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.649 qpair failed and we were unable to recover it.
00:34:16.649 [2024-07-25 00:14:22.149613] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.649 [2024-07-25 00:14:22.149749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.649 [2024-07-25 00:14:22.149776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.650 [2024-07-25 00:14:22.149792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.650 [2024-07-25 00:14:22.149805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.650 [2024-07-25 00:14:22.149833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.650 qpair failed and we were unable to recover it.
00:34:16.650 [2024-07-25 00:14:22.159620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.650 [2024-07-25 00:14:22.159783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.650 [2024-07-25 00:14:22.159812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.650 [2024-07-25 00:14:22.159831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.650 [2024-07-25 00:14:22.159845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.650 [2024-07-25 00:14:22.159874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.650 qpair failed and we were unable to recover it.
00:34:16.650 [2024-07-25 00:14:22.169625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.650 [2024-07-25 00:14:22.169765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.650 [2024-07-25 00:14:22.169792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.650 [2024-07-25 00:14:22.169808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.650 [2024-07-25 00:14:22.169822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.650 [2024-07-25 00:14:22.169851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.650 qpair failed and we were unable to recover it.
00:34:16.650 [2024-07-25 00:14:22.179641] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.650 [2024-07-25 00:14:22.179788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.650 [2024-07-25 00:14:22.179815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.650 [2024-07-25 00:14:22.179830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.650 [2024-07-25 00:14:22.179843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:16.650 [2024-07-25 00:14:22.179872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:16.650 qpair failed and we were unable to recover it.
00:34:16.650 [2024-07-25 00:14:22.189671] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.650 [2024-07-25 00:14:22.189806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.650 [2024-07-25 00:14:22.189833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.650 [2024-07-25 00:14:22.189849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.650 [2024-07-25 00:14:22.189862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.650 [2024-07-25 00:14:22.189890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.650 qpair failed and we were unable to recover it. 
00:34:16.650 [2024-07-25 00:14:22.199668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.650 [2024-07-25 00:14:22.199830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.650 [2024-07-25 00:14:22.199857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.650 [2024-07-25 00:14:22.199878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.650 [2024-07-25 00:14:22.199892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.650 [2024-07-25 00:14:22.199921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.650 qpair failed and we were unable to recover it. 
00:34:16.650 [2024-07-25 00:14:22.209838] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.650 [2024-07-25 00:14:22.209977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.650 [2024-07-25 00:14:22.210004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.650 [2024-07-25 00:14:22.210019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.650 [2024-07-25 00:14:22.210032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.650 [2024-07-25 00:14:22.210061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.650 qpair failed and we were unable to recover it. 
00:34:16.650 [2024-07-25 00:14:22.219779] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.650 [2024-07-25 00:14:22.219925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.650 [2024-07-25 00:14:22.219952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.650 [2024-07-25 00:14:22.219966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.650 [2024-07-25 00:14:22.219979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.650 [2024-07-25 00:14:22.220008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.650 qpair failed and we were unable to recover it. 
00:34:16.650 [2024-07-25 00:14:22.229784] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.650 [2024-07-25 00:14:22.229917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.650 [2024-07-25 00:14:22.229944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.650 [2024-07-25 00:14:22.229960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.650 [2024-07-25 00:14:22.229974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.650 [2024-07-25 00:14:22.230002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.650 qpair failed and we were unable to recover it. 
00:34:16.650 [2024-07-25 00:14:22.239795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.650 [2024-07-25 00:14:22.239919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.650 [2024-07-25 00:14:22.239946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.650 [2024-07-25 00:14:22.239962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.650 [2024-07-25 00:14:22.239976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.650 [2024-07-25 00:14:22.240004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.650 qpair failed and we were unable to recover it. 
00:34:16.650 [2024-07-25 00:14:22.249861] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.650 [2024-07-25 00:14:22.250004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.650 [2024-07-25 00:14:22.250031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.650 [2024-07-25 00:14:22.250046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.650 [2024-07-25 00:14:22.250059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.650 [2024-07-25 00:14:22.250088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.650 qpair failed and we were unable to recover it. 
00:34:16.650 [2024-07-25 00:14:22.259928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.650 [2024-07-25 00:14:22.260080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.650 [2024-07-25 00:14:22.260115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.650 [2024-07-25 00:14:22.260133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.650 [2024-07-25 00:14:22.260146] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.650 [2024-07-25 00:14:22.260175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.650 qpair failed and we were unable to recover it. 
00:34:16.650 [2024-07-25 00:14:22.269986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.650 [2024-07-25 00:14:22.270122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.650 [2024-07-25 00:14:22.270149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.650 [2024-07-25 00:14:22.270164] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.650 [2024-07-25 00:14:22.270178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.650 [2024-07-25 00:14:22.270207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.650 qpair failed and we were unable to recover it. 
00:34:16.650 [2024-07-25 00:14:22.279969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.650 [2024-07-25 00:14:22.280109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.650 [2024-07-25 00:14:22.280145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.651 [2024-07-25 00:14:22.280159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.651 [2024-07-25 00:14:22.280173] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.651 [2024-07-25 00:14:22.280203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.651 qpair failed and we were unable to recover it. 
00:34:16.651 [2024-07-25 00:14:22.289978] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.651 [2024-07-25 00:14:22.290127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.651 [2024-07-25 00:14:22.290153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.651 [2024-07-25 00:14:22.290173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.651 [2024-07-25 00:14:22.290187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.651 [2024-07-25 00:14:22.290218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.651 qpair failed and we were unable to recover it. 
00:34:16.651 [2024-07-25 00:14:22.300006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.651 [2024-07-25 00:14:22.300182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.651 [2024-07-25 00:14:22.300209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.651 [2024-07-25 00:14:22.300225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.651 [2024-07-25 00:14:22.300238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.651 [2024-07-25 00:14:22.300267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.651 qpair failed and we were unable to recover it. 
00:34:16.651 [2024-07-25 00:14:22.309988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.651 [2024-07-25 00:14:22.310126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.651 [2024-07-25 00:14:22.310151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.651 [2024-07-25 00:14:22.310166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.651 [2024-07-25 00:14:22.310179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.651 [2024-07-25 00:14:22.310208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.651 qpair failed and we were unable to recover it. 
00:34:16.651 [2024-07-25 00:14:22.320031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.651 [2024-07-25 00:14:22.320170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.651 [2024-07-25 00:14:22.320206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.651 [2024-07-25 00:14:22.320222] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.651 [2024-07-25 00:14:22.320236] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.651 [2024-07-25 00:14:22.320266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.651 qpair failed and we were unable to recover it. 
00:34:16.651 [2024-07-25 00:14:22.330136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.651 [2024-07-25 00:14:22.330290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.651 [2024-07-25 00:14:22.330317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.651 [2024-07-25 00:14:22.330332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.651 [2024-07-25 00:14:22.330347] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.651 [2024-07-25 00:14:22.330376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.651 qpair failed and we were unable to recover it. 
00:34:16.651 [2024-07-25 00:14:22.340087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.651 [2024-07-25 00:14:22.340224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.651 [2024-07-25 00:14:22.340250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.651 [2024-07-25 00:14:22.340265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.651 [2024-07-25 00:14:22.340279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.651 [2024-07-25 00:14:22.340307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.651 qpair failed and we were unable to recover it. 
00:34:16.651 [2024-07-25 00:14:22.350213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.651 [2024-07-25 00:14:22.350340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.651 [2024-07-25 00:14:22.350367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.651 [2024-07-25 00:14:22.350383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.651 [2024-07-25 00:14:22.350396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.651 [2024-07-25 00:14:22.350425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.651 qpair failed and we were unable to recover it. 
00:34:16.651 [2024-07-25 00:14:22.360184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.651 [2024-07-25 00:14:22.360313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.651 [2024-07-25 00:14:22.360339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.651 [2024-07-25 00:14:22.360355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.651 [2024-07-25 00:14:22.360370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.651 [2024-07-25 00:14:22.360401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.651 qpair failed and we were unable to recover it. 
00:34:16.910 [2024-07-25 00:14:22.370215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.910 [2024-07-25 00:14:22.370376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.910 [2024-07-25 00:14:22.370415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.910 [2024-07-25 00:14:22.370432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.910 [2024-07-25 00:14:22.370446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.910 [2024-07-25 00:14:22.370475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.910 qpair failed and we were unable to recover it. 
00:34:16.910 [2024-07-25 00:14:22.380311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.910 [2024-07-25 00:14:22.380457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.910 [2024-07-25 00:14:22.380484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.910 [2024-07-25 00:14:22.380509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.910 [2024-07-25 00:14:22.380523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.910 [2024-07-25 00:14:22.380568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.910 qpair failed and we were unable to recover it. 
00:34:16.910 [2024-07-25 00:14:22.390252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.910 [2024-07-25 00:14:22.390425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.910 [2024-07-25 00:14:22.390451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.910 [2024-07-25 00:14:22.390467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.910 [2024-07-25 00:14:22.390480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.910 [2024-07-25 00:14:22.390510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.910 qpair failed and we were unable to recover it. 
00:34:16.910 [2024-07-25 00:14:22.400275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.910 [2024-07-25 00:14:22.400410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.910 [2024-07-25 00:14:22.400436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.910 [2024-07-25 00:14:22.400451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.910 [2024-07-25 00:14:22.400464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.910 [2024-07-25 00:14:22.400494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.911 qpair failed and we were unable to recover it. 
00:34:16.911 [2024-07-25 00:14:22.410375] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.911 [2024-07-25 00:14:22.410537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.911 [2024-07-25 00:14:22.410563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.911 [2024-07-25 00:14:22.410578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.911 [2024-07-25 00:14:22.410591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.911 [2024-07-25 00:14:22.410620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.911 qpair failed and we were unable to recover it. 
00:34:16.911 [2024-07-25 00:14:22.420318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.911 [2024-07-25 00:14:22.420450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.911 [2024-07-25 00:14:22.420476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.911 [2024-07-25 00:14:22.420491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.911 [2024-07-25 00:14:22.420503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.911 [2024-07-25 00:14:22.420531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.911 qpair failed and we were unable to recover it. 
00:34:16.911 [2024-07-25 00:14:22.430356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.911 [2024-07-25 00:14:22.430488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.911 [2024-07-25 00:14:22.430515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.911 [2024-07-25 00:14:22.430530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.911 [2024-07-25 00:14:22.430543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.911 [2024-07-25 00:14:22.430571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.911 qpair failed and we were unable to recover it. 
00:34:16.911 [2024-07-25 00:14:22.440405] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.911 [2024-07-25 00:14:22.440541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.911 [2024-07-25 00:14:22.440566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.911 [2024-07-25 00:14:22.440581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.911 [2024-07-25 00:14:22.440596] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.911 [2024-07-25 00:14:22.440624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.911 qpair failed and we were unable to recover it. 
00:34:16.911 [2024-07-25 00:14:22.450418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.911 [2024-07-25 00:14:22.450553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.911 [2024-07-25 00:14:22.450580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.911 [2024-07-25 00:14:22.450594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.911 [2024-07-25 00:14:22.450607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.911 [2024-07-25 00:14:22.450636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.911 qpair failed and we were unable to recover it. 
00:34:16.911 [2024-07-25 00:14:22.460435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.911 [2024-07-25 00:14:22.460564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.911 [2024-07-25 00:14:22.460588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.911 [2024-07-25 00:14:22.460602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.911 [2024-07-25 00:14:22.460615] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.911 [2024-07-25 00:14:22.460642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.911 qpair failed and we were unable to recover it. 
00:34:16.911 [2024-07-25 00:14:22.470595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.911 [2024-07-25 00:14:22.470725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.911 [2024-07-25 00:14:22.470756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.911 [2024-07-25 00:14:22.470772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.911 [2024-07-25 00:14:22.470786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.911 [2024-07-25 00:14:22.470816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.911 qpair failed and we were unable to recover it. 
00:34:16.911 [2024-07-25 00:14:22.480485] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.911 [2024-07-25 00:14:22.480609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.911 [2024-07-25 00:14:22.480634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.911 [2024-07-25 00:14:22.480649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.911 [2024-07-25 00:14:22.480662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.911 [2024-07-25 00:14:22.480689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.911 qpair failed and we were unable to recover it. 
00:34:16.911 [2024-07-25 00:14:22.490530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.911 [2024-07-25 00:14:22.490680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.911 [2024-07-25 00:14:22.490705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.911 [2024-07-25 00:14:22.490720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.911 [2024-07-25 00:14:22.490733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.911 [2024-07-25 00:14:22.490760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.911 qpair failed and we were unable to recover it. 
00:34:16.911 [2024-07-25 00:14:22.500565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.911 [2024-07-25 00:14:22.500753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.911 [2024-07-25 00:14:22.500779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.911 [2024-07-25 00:14:22.500793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.911 [2024-07-25 00:14:22.500806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.911 [2024-07-25 00:14:22.500833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.911 qpair failed and we were unable to recover it. 
00:34:16.911 [2024-07-25 00:14:22.510655] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.911 [2024-07-25 00:14:22.510812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.911 [2024-07-25 00:14:22.510838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.911 [2024-07-25 00:14:22.510856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.911 [2024-07-25 00:14:22.510869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.911 [2024-07-25 00:14:22.510900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.911 qpair failed and we were unable to recover it. 
00:34:16.911 [2024-07-25 00:14:22.520635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.911 [2024-07-25 00:14:22.520767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.911 [2024-07-25 00:14:22.520793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.911 [2024-07-25 00:14:22.520807] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.911 [2024-07-25 00:14:22.520820] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.911 [2024-07-25 00:14:22.520847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.911 qpair failed and we were unable to recover it. 
00:34:16.911 [2024-07-25 00:14:22.530713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.911 [2024-07-25 00:14:22.530881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.911 [2024-07-25 00:14:22.530907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.912 [2024-07-25 00:14:22.530921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.912 [2024-07-25 00:14:22.530934] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.912 [2024-07-25 00:14:22.530961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.912 qpair failed and we were unable to recover it. 
00:34:16.912 [2024-07-25 00:14:22.540676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.912 [2024-07-25 00:14:22.540810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.912 [2024-07-25 00:14:22.540836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.912 [2024-07-25 00:14:22.540849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.912 [2024-07-25 00:14:22.540863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.912 [2024-07-25 00:14:22.540890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.912 qpair failed and we were unable to recover it. 
00:34:16.912 [2024-07-25 00:14:22.550739] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.912 [2024-07-25 00:14:22.550880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.912 [2024-07-25 00:14:22.550904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.912 [2024-07-25 00:14:22.550918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.912 [2024-07-25 00:14:22.550931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.912 [2024-07-25 00:14:22.550958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.912 qpair failed and we were unable to recover it. 
00:34:16.912 [2024-07-25 00:14:22.560735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.912 [2024-07-25 00:14:22.560860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.912 [2024-07-25 00:14:22.560890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.912 [2024-07-25 00:14:22.560905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.912 [2024-07-25 00:14:22.560919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.912 [2024-07-25 00:14:22.560947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.912 qpair failed and we were unable to recover it. 
00:34:16.912 [2024-07-25 00:14:22.570751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.912 [2024-07-25 00:14:22.570898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.912 [2024-07-25 00:14:22.570923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.912 [2024-07-25 00:14:22.570937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.912 [2024-07-25 00:14:22.570950] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.912 [2024-07-25 00:14:22.570976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.912 qpair failed and we were unable to recover it. 
00:34:16.912 [2024-07-25 00:14:22.580815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.912 [2024-07-25 00:14:22.580988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.912 [2024-07-25 00:14:22.581013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.912 [2024-07-25 00:14:22.581027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.912 [2024-07-25 00:14:22.581040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.912 [2024-07-25 00:14:22.581067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.912 qpair failed and we were unable to recover it. 
00:34:16.912 [2024-07-25 00:14:22.590790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.912 [2024-07-25 00:14:22.590943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.912 [2024-07-25 00:14:22.590968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.912 [2024-07-25 00:14:22.590982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.912 [2024-07-25 00:14:22.590995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.912 [2024-07-25 00:14:22.591022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.912 qpair failed and we were unable to recover it. 
00:34:16.912 [2024-07-25 00:14:22.600876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.912 [2024-07-25 00:14:22.601009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.912 [2024-07-25 00:14:22.601033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.912 [2024-07-25 00:14:22.601048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.912 [2024-07-25 00:14:22.601060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.912 [2024-07-25 00:14:22.601093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.912 qpair failed and we were unable to recover it. 
00:34:16.912 [2024-07-25 00:14:22.610855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.912 [2024-07-25 00:14:22.610989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.912 [2024-07-25 00:14:22.611014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.912 [2024-07-25 00:14:22.611028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.912 [2024-07-25 00:14:22.611041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.912 [2024-07-25 00:14:22.611069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.912 qpair failed and we were unable to recover it. 
00:34:16.912 [2024-07-25 00:14:22.620883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.912 [2024-07-25 00:14:22.621015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.912 [2024-07-25 00:14:22.621040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.912 [2024-07-25 00:14:22.621054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.912 [2024-07-25 00:14:22.621066] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:16.912 [2024-07-25 00:14:22.621095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.912 qpair failed and we were unable to recover it. 
00:34:17.171 [2024-07-25 00:14:22.630907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.171 [2024-07-25 00:14:22.631066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.171 [2024-07-25 00:14:22.631109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.171 [2024-07-25 00:14:22.631143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.171 [2024-07-25 00:14:22.631160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.171 [2024-07-25 00:14:22.631191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.171 qpair failed and we were unable to recover it. 
00:34:17.171 [2024-07-25 00:14:22.640954] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.171 [2024-07-25 00:14:22.641127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.171 [2024-07-25 00:14:22.641153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.171 [2024-07-25 00:14:22.641167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.171 [2024-07-25 00:14:22.641181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.171 [2024-07-25 00:14:22.641208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.171 qpair failed and we were unable to recover it. 
00:34:17.171 [2024-07-25 00:14:22.650980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.171 [2024-07-25 00:14:22.651134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.171 [2024-07-25 00:14:22.651165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.171 [2024-07-25 00:14:22.651180] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.171 [2024-07-25 00:14:22.651193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.171 [2024-07-25 00:14:22.651221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.171 qpair failed and we were unable to recover it. 
00:34:17.171 [2024-07-25 00:14:22.661017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.171 [2024-07-25 00:14:22.661151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.172 [2024-07-25 00:14:22.661177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.172 [2024-07-25 00:14:22.661191] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.172 [2024-07-25 00:14:22.661204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.172 [2024-07-25 00:14:22.661232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.172 qpair failed and we were unable to recover it. 
00:34:17.172 [2024-07-25 00:14:22.671039] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.172 [2024-07-25 00:14:22.671179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.172 [2024-07-25 00:14:22.671205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.172 [2024-07-25 00:14:22.671219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.172 [2024-07-25 00:14:22.671232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.172 [2024-07-25 00:14:22.671260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.172 qpair failed and we were unable to recover it. 
00:34:17.172 [2024-07-25 00:14:22.681112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.172 [2024-07-25 00:14:22.681265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.172 [2024-07-25 00:14:22.681291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.172 [2024-07-25 00:14:22.681304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.172 [2024-07-25 00:14:22.681317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.172 [2024-07-25 00:14:22.681345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.172 qpair failed and we were unable to recover it. 
00:34:17.172 [2024-07-25 00:14:22.691091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.172 [2024-07-25 00:14:22.691233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.172 [2024-07-25 00:14:22.691258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.172 [2024-07-25 00:14:22.691272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.172 [2024-07-25 00:14:22.691285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.172 [2024-07-25 00:14:22.691318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.172 qpair failed and we were unable to recover it. 
00:34:17.172 [2024-07-25 00:14:22.701139] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.172 [2024-07-25 00:14:22.701283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.172 [2024-07-25 00:14:22.701308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.172 [2024-07-25 00:14:22.701322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.172 [2024-07-25 00:14:22.701335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.172 [2024-07-25 00:14:22.701363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.172 qpair failed and we were unable to recover it. 
00:34:17.172 [2024-07-25 00:14:22.711172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.172 [2024-07-25 00:14:22.711306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.172 [2024-07-25 00:14:22.711332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.172 [2024-07-25 00:14:22.711346] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.172 [2024-07-25 00:14:22.711359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.172 [2024-07-25 00:14:22.711386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.172 qpair failed and we were unable to recover it. 
00:34:17.172 [2024-07-25 00:14:22.721184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.172 [2024-07-25 00:14:22.721315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.172 [2024-07-25 00:14:22.721340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.172 [2024-07-25 00:14:22.721355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.172 [2024-07-25 00:14:22.721367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.172 [2024-07-25 00:14:22.721395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.172 qpair failed and we were unable to recover it. 
00:34:17.172 [2024-07-25 00:14:22.731248] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.172 [2024-07-25 00:14:22.731381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.172 [2024-07-25 00:14:22.731406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.172 [2024-07-25 00:14:22.731420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.172 [2024-07-25 00:14:22.731433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.172 [2024-07-25 00:14:22.731460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.172 qpair failed and we were unable to recover it. 
00:34:17.172 [2024-07-25 00:14:22.741278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.172 [2024-07-25 00:14:22.741412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.172 [2024-07-25 00:14:22.741441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.172 [2024-07-25 00:14:22.741456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.172 [2024-07-25 00:14:22.741469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.172 [2024-07-25 00:14:22.741497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.172 qpair failed and we were unable to recover it. 
00:34:17.172 [2024-07-25 00:14:22.751269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.172 [2024-07-25 00:14:22.751449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.172 [2024-07-25 00:14:22.751474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.172 [2024-07-25 00:14:22.751488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.172 [2024-07-25 00:14:22.751500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.172 [2024-07-25 00:14:22.751528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.172 qpair failed and we were unable to recover it. 
00:34:17.172 [2024-07-25 00:14:22.761354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.172 [2024-07-25 00:14:22.761495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.172 [2024-07-25 00:14:22.761519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.172 [2024-07-25 00:14:22.761533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.172 [2024-07-25 00:14:22.761546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.172 [2024-07-25 00:14:22.761573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.172 qpair failed and we were unable to recover it. 
00:34:17.172 [2024-07-25 00:14:22.771365] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.172 [2024-07-25 00:14:22.771543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.172 [2024-07-25 00:14:22.771568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.172 [2024-07-25 00:14:22.771582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.172 [2024-07-25 00:14:22.771594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.172 [2024-07-25 00:14:22.771621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.172 qpair failed and we were unable to recover it. 
00:34:17.172 [2024-07-25 00:14:22.781384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.172 [2024-07-25 00:14:22.781514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.172 [2024-07-25 00:14:22.781540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.172 [2024-07-25 00:14:22.781554] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.172 [2024-07-25 00:14:22.781566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.172 [2024-07-25 00:14:22.781599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.172 qpair failed and we were unable to recover it. 
00:34:17.173 [2024-07-25 00:14:22.791374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.173 [2024-07-25 00:14:22.791501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.173 [2024-07-25 00:14:22.791526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.173 [2024-07-25 00:14:22.791540] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.173 [2024-07-25 00:14:22.791553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.173 [2024-07-25 00:14:22.791580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.173 qpair failed and we were unable to recover it. 
00:34:17.173 [2024-07-25 00:14:22.801522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.173 [2024-07-25 00:14:22.801677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.173 [2024-07-25 00:14:22.801703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.173 [2024-07-25 00:14:22.801718] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.173 [2024-07-25 00:14:22.801734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.173 [2024-07-25 00:14:22.801764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.173 qpair failed and we were unable to recover it. 
00:34:17.173 [2024-07-25 00:14:22.811442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.173 [2024-07-25 00:14:22.811618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.173 [2024-07-25 00:14:22.811643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.173 [2024-07-25 00:14:22.811658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.173 [2024-07-25 00:14:22.811670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.173 [2024-07-25 00:14:22.811698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.173 qpair failed and we were unable to recover it. 
00:34:17.173 [2024-07-25 00:14:22.821594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.173 [2024-07-25 00:14:22.821760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.173 [2024-07-25 00:14:22.821785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.173 [2024-07-25 00:14:22.821799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.173 [2024-07-25 00:14:22.821812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.173 [2024-07-25 00:14:22.821839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.173 qpair failed and we were unable to recover it. 
00:34:17.173 [2024-07-25 00:14:22.831525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.173 [2024-07-25 00:14:22.831654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.173 [2024-07-25 00:14:22.831684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.173 [2024-07-25 00:14:22.831699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.173 [2024-07-25 00:14:22.831712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.173 [2024-07-25 00:14:22.831739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.173 qpair failed and we were unable to recover it. 
00:34:17.173 [2024-07-25 00:14:22.841557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.173 [2024-07-25 00:14:22.841685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.173 [2024-07-25 00:14:22.841710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.173 [2024-07-25 00:14:22.841723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.173 [2024-07-25 00:14:22.841736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.173 [2024-07-25 00:14:22.841764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.173 qpair failed and we were unable to recover it. 
00:34:17.173 [2024-07-25 00:14:22.851647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.173 [2024-07-25 00:14:22.851789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.173 [2024-07-25 00:14:22.851814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.173 [2024-07-25 00:14:22.851828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.173 [2024-07-25 00:14:22.851841] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.173 [2024-07-25 00:14:22.851868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.173 qpair failed and we were unable to recover it. 
00:34:17.173 [2024-07-25 00:14:22.861686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.173 [2024-07-25 00:14:22.861824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.173 [2024-07-25 00:14:22.861852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.173 [2024-07-25 00:14:22.861866] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.173 [2024-07-25 00:14:22.861879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.173 [2024-07-25 00:14:22.861907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.173 qpair failed and we were unable to recover it. 
00:34:17.173 [2024-07-25 00:14:22.871698] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.173 [2024-07-25 00:14:22.871835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.173 [2024-07-25 00:14:22.871861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.173 [2024-07-25 00:14:22.871880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.173 [2024-07-25 00:14:22.871899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.173 [2024-07-25 00:14:22.871929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.173 qpair failed and we were unable to recover it. 
00:34:17.173 [2024-07-25 00:14:22.881639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.173 [2024-07-25 00:14:22.881772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.173 [2024-07-25 00:14:22.881798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.173 [2024-07-25 00:14:22.881813] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.173 [2024-07-25 00:14:22.881825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.173 [2024-07-25 00:14:22.881853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.173 qpair failed and we were unable to recover it. 
00:34:17.432 [2024-07-25 00:14:22.891670] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.432 [2024-07-25 00:14:22.891805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.432 [2024-07-25 00:14:22.891833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.432 [2024-07-25 00:14:22.891847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.432 [2024-07-25 00:14:22.891860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.432 [2024-07-25 00:14:22.891889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.432 qpair failed and we were unable to recover it. 
00:34:17.432 [2024-07-25 00:14:22.901694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.432 [2024-07-25 00:14:22.901832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.432 [2024-07-25 00:14:22.901858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.432 [2024-07-25 00:14:22.901872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.432 [2024-07-25 00:14:22.901885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.432 [2024-07-25 00:14:22.901913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.432 qpair failed and we were unable to recover it. 
00:34:17.432 [2024-07-25 00:14:22.911746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.432 [2024-07-25 00:14:22.911925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.432 [2024-07-25 00:14:22.911950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.432 [2024-07-25 00:14:22.911965] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.433 [2024-07-25 00:14:22.911978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.433 [2024-07-25 00:14:22.912005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-25 00:14:22.921741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.433 [2024-07-25 00:14:22.921876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.433 [2024-07-25 00:14:22.921902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.433 [2024-07-25 00:14:22.921915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.433 [2024-07-25 00:14:22.921928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.433 [2024-07-25 00:14:22.921956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-25 00:14:22.931805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.433 [2024-07-25 00:14:22.931977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.433 [2024-07-25 00:14:22.932002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.433 [2024-07-25 00:14:22.932016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.433 [2024-07-25 00:14:22.932028] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.433 [2024-07-25 00:14:22.932055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-25 00:14:22.941808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.433 [2024-07-25 00:14:22.941949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.433 [2024-07-25 00:14:22.941974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.433 [2024-07-25 00:14:22.941988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.433 [2024-07-25 00:14:22.942001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.433 [2024-07-25 00:14:22.942028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-25 00:14:22.951856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.433 [2024-07-25 00:14:22.951992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.433 [2024-07-25 00:14:22.952018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.433 [2024-07-25 00:14:22.952031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.433 [2024-07-25 00:14:22.952044] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.433 [2024-07-25 00:14:22.952072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-25 00:14:22.961907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.433 [2024-07-25 00:14:22.962045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.433 [2024-07-25 00:14:22.962070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.433 [2024-07-25 00:14:22.962084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.433 [2024-07-25 00:14:22.962109] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.433 [2024-07-25 00:14:22.962140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-25 00:14:22.971892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.433 [2024-07-25 00:14:22.972029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.433 [2024-07-25 00:14:22.972054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.433 [2024-07-25 00:14:22.972068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.433 [2024-07-25 00:14:22.972081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.433 [2024-07-25 00:14:22.972116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-25 00:14:22.981947] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.433 [2024-07-25 00:14:22.982086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.433 [2024-07-25 00:14:22.982125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.433 [2024-07-25 00:14:22.982143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.433 [2024-07-25 00:14:22.982157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.433 [2024-07-25 00:14:22.982186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-25 00:14:22.991963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.433 [2024-07-25 00:14:22.992090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.433 [2024-07-25 00:14:22.992121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.433 [2024-07-25 00:14:22.992136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.433 [2024-07-25 00:14:22.992149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.433 [2024-07-25 00:14:22.992178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-25 00:14:23.001996] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.433 [2024-07-25 00:14:23.002147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.433 [2024-07-25 00:14:23.002174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.433 [2024-07-25 00:14:23.002188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.433 [2024-07-25 00:14:23.002201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.433 [2024-07-25 00:14:23.002228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-25 00:14:23.012011] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.433 [2024-07-25 00:14:23.012162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.433 [2024-07-25 00:14:23.012188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.433 [2024-07-25 00:14:23.012203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.433 [2024-07-25 00:14:23.012215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.433 [2024-07-25 00:14:23.012244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-25 00:14:23.022040] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.433 [2024-07-25 00:14:23.022192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.433 [2024-07-25 00:14:23.022218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.433 [2024-07-25 00:14:23.022232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.433 [2024-07-25 00:14:23.022245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.433 [2024-07-25 00:14:23.022274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-25 00:14:23.032059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.433 [2024-07-25 00:14:23.032198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.433 [2024-07-25 00:14:23.032225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.433 [2024-07-25 00:14:23.032239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.433 [2024-07-25 00:14:23.032252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.433 [2024-07-25 00:14:23.032280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-25 00:14:23.042110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.433 [2024-07-25 00:14:23.042239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.434 [2024-07-25 00:14:23.042264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.434 [2024-07-25 00:14:23.042278] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.434 [2024-07-25 00:14:23.042291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.434 [2024-07-25 00:14:23.042319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.434 qpair failed and we were unable to recover it. 
00:34:17.434 [2024-07-25 00:14:23.052146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.434 [2024-07-25 00:14:23.052282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.434 [2024-07-25 00:14:23.052307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.434 [2024-07-25 00:14:23.052321] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.434 [2024-07-25 00:14:23.052339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.434 [2024-07-25 00:14:23.052367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.434 qpair failed and we were unable to recover it. 
00:34:17.434 [2024-07-25 00:14:23.062154] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.434 [2024-07-25 00:14:23.062290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.434 [2024-07-25 00:14:23.062315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.434 [2024-07-25 00:14:23.062329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.434 [2024-07-25 00:14:23.062342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.434 [2024-07-25 00:14:23.062369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.434 qpair failed and we were unable to recover it. 
00:34:17.434 [2024-07-25 00:14:23.072185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.434 [2024-07-25 00:14:23.072353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.434 [2024-07-25 00:14:23.072379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.434 [2024-07-25 00:14:23.072392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.434 [2024-07-25 00:14:23.072406] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.434 [2024-07-25 00:14:23.072435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.434 qpair failed and we were unable to recover it. 
00:34:17.434 [2024-07-25 00:14:23.082198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.434 [2024-07-25 00:14:23.082346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.434 [2024-07-25 00:14:23.082371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.434 [2024-07-25 00:14:23.082385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.434 [2024-07-25 00:14:23.082398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.434 [2024-07-25 00:14:23.082426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.434 qpair failed and we were unable to recover it. 
00:34:17.434 [2024-07-25 00:14:23.092269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.434 [2024-07-25 00:14:23.092445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.434 [2024-07-25 00:14:23.092470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.434 [2024-07-25 00:14:23.092484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.434 [2024-07-25 00:14:23.092496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.434 [2024-07-25 00:14:23.092524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.434 qpair failed and we were unable to recover it. 
00:34:17.434 [2024-07-25 00:14:23.102245] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.434 [2024-07-25 00:14:23.102382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.434 [2024-07-25 00:14:23.102407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.434 [2024-07-25 00:14:23.102421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.434 [2024-07-25 00:14:23.102434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.434 [2024-07-25 00:14:23.102462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.434 qpair failed and we were unable to recover it. 
00:34:17.434 [2024-07-25 00:14:23.112295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.434 [2024-07-25 00:14:23.112423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.434 [2024-07-25 00:14:23.112449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.434 [2024-07-25 00:14:23.112463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.434 [2024-07-25 00:14:23.112475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.434 [2024-07-25 00:14:23.112503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.434 qpair failed and we were unable to recover it.
00:34:17.434 [2024-07-25 00:14:23.122351] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.434 [2024-07-25 00:14:23.122479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.434 [2024-07-25 00:14:23.122504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.434 [2024-07-25 00:14:23.122517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.434 [2024-07-25 00:14:23.122530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.434 [2024-07-25 00:14:23.122558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.434 qpair failed and we were unable to recover it.
00:34:17.434 [2024-07-25 00:14:23.132358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.434 [2024-07-25 00:14:23.132495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.434 [2024-07-25 00:14:23.132520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.434 [2024-07-25 00:14:23.132534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.434 [2024-07-25 00:14:23.132547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.434 [2024-07-25 00:14:23.132574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.434 qpair failed and we were unable to recover it.
00:34:17.434 [2024-07-25 00:14:23.142371] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.434 [2024-07-25 00:14:23.142507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.434 [2024-07-25 00:14:23.142532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.434 [2024-07-25 00:14:23.142555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.434 [2024-07-25 00:14:23.142569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.434 [2024-07-25 00:14:23.142597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.434 qpair failed and we were unable to recover it.
00:34:17.693 [2024-07-25 00:14:23.152431] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.693 [2024-07-25 00:14:23.152601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.693 [2024-07-25 00:14:23.152629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.693 [2024-07-25 00:14:23.152644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.694 [2024-07-25 00:14:23.152657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.694 [2024-07-25 00:14:23.152686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.694 qpair failed and we were unable to recover it.
00:34:17.694 [2024-07-25 00:14:23.162428] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.694 [2024-07-25 00:14:23.162553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.694 [2024-07-25 00:14:23.162580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.694 [2024-07-25 00:14:23.162595] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.694 [2024-07-25 00:14:23.162608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.694 [2024-07-25 00:14:23.162637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.694 qpair failed and we were unable to recover it.
00:34:17.694 [2024-07-25 00:14:23.172460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.694 [2024-07-25 00:14:23.172634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.694 [2024-07-25 00:14:23.172660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.694 [2024-07-25 00:14:23.172674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.694 [2024-07-25 00:14:23.172687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.694 [2024-07-25 00:14:23.172714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.694 qpair failed and we were unable to recover it.
00:34:17.694 [2024-07-25 00:14:23.182475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.694 [2024-07-25 00:14:23.182618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.694 [2024-07-25 00:14:23.182653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.694 [2024-07-25 00:14:23.182667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.694 [2024-07-25 00:14:23.182679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.694 [2024-07-25 00:14:23.182707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.694 qpair failed and we were unable to recover it.
00:34:17.694 [2024-07-25 00:14:23.192551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.694 [2024-07-25 00:14:23.192705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.694 [2024-07-25 00:14:23.192731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.694 [2024-07-25 00:14:23.192744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.694 [2024-07-25 00:14:23.192757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.694 [2024-07-25 00:14:23.192785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.694 qpair failed and we were unable to recover it.
00:34:17.694 [2024-07-25 00:14:23.202536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.694 [2024-07-25 00:14:23.202663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.694 [2024-07-25 00:14:23.202688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.694 [2024-07-25 00:14:23.202702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.694 [2024-07-25 00:14:23.202715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.694 [2024-07-25 00:14:23.202743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.694 qpair failed and we were unable to recover it.
00:34:17.694 [2024-07-25 00:14:23.212585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.694 [2024-07-25 00:14:23.212748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.694 [2024-07-25 00:14:23.212776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.694 [2024-07-25 00:14:23.212791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.694 [2024-07-25 00:14:23.212805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.694 [2024-07-25 00:14:23.212834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.694 qpair failed and we were unable to recover it.
00:34:17.694 [2024-07-25 00:14:23.222604] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.694 [2024-07-25 00:14:23.222738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.694 [2024-07-25 00:14:23.222764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.694 [2024-07-25 00:14:23.222778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.694 [2024-07-25 00:14:23.222791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.694 [2024-07-25 00:14:23.222820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.694 qpair failed and we were unable to recover it.
00:34:17.694 [2024-07-25 00:14:23.232640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.694 [2024-07-25 00:14:23.232773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.694 [2024-07-25 00:14:23.232799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.694 [2024-07-25 00:14:23.232819] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.694 [2024-07-25 00:14:23.232834] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.694 [2024-07-25 00:14:23.232864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.694 qpair failed and we were unable to recover it.
00:34:17.694 [2024-07-25 00:14:23.242691] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.694 [2024-07-25 00:14:23.242821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.694 [2024-07-25 00:14:23.242846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.694 [2024-07-25 00:14:23.242861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.694 [2024-07-25 00:14:23.242874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.694 [2024-07-25 00:14:23.242902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.694 qpair failed and we were unable to recover it.
00:34:17.694 [2024-07-25 00:14:23.252711] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.694 [2024-07-25 00:14:23.252864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.694 [2024-07-25 00:14:23.252889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.694 [2024-07-25 00:14:23.252902] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.694 [2024-07-25 00:14:23.252915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.694 [2024-07-25 00:14:23.252942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.694 qpair failed and we were unable to recover it.
00:34:17.694 [2024-07-25 00:14:23.262728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.694 [2024-07-25 00:14:23.262857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.694 [2024-07-25 00:14:23.262882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.694 [2024-07-25 00:14:23.262896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.694 [2024-07-25 00:14:23.262909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.694 [2024-07-25 00:14:23.262937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.694 qpair failed and we were unable to recover it.
00:34:17.694 [2024-07-25 00:14:23.272770] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.694 [2024-07-25 00:14:23.272901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.694 [2024-07-25 00:14:23.272927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.694 [2024-07-25 00:14:23.272941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.694 [2024-07-25 00:14:23.272954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.694 [2024-07-25 00:14:23.272982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.694 qpair failed and we were unable to recover it.
00:34:17.694 [2024-07-25 00:14:23.282764] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.695 [2024-07-25 00:14:23.282892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.695 [2024-07-25 00:14:23.282918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.695 [2024-07-25 00:14:23.282931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.695 [2024-07-25 00:14:23.282944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.695 [2024-07-25 00:14:23.282972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.695 qpair failed and we were unable to recover it.
00:34:17.695 [2024-07-25 00:14:23.292826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.695 [2024-07-25 00:14:23.292964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.695 [2024-07-25 00:14:23.292989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.695 [2024-07-25 00:14:23.293003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.695 [2024-07-25 00:14:23.293016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.695 [2024-07-25 00:14:23.293044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.695 qpair failed and we were unable to recover it.
00:34:17.695 [2024-07-25 00:14:23.302864] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.695 [2024-07-25 00:14:23.303009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.695 [2024-07-25 00:14:23.303034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.695 [2024-07-25 00:14:23.303048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.695 [2024-07-25 00:14:23.303060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.695 [2024-07-25 00:14:23.303088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.695 qpair failed and we were unable to recover it.
00:34:17.695 [2024-07-25 00:14:23.312880] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.695 [2024-07-25 00:14:23.313007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.695 [2024-07-25 00:14:23.313033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.695 [2024-07-25 00:14:23.313047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.695 [2024-07-25 00:14:23.313060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.695 [2024-07-25 00:14:23.313087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.695 qpair failed and we were unable to recover it.
00:34:17.695 [2024-07-25 00:14:23.322934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.695 [2024-07-25 00:14:23.323061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.695 [2024-07-25 00:14:23.323086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.695 [2024-07-25 00:14:23.323114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.695 [2024-07-25 00:14:23.323129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.695 [2024-07-25 00:14:23.323157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.695 qpair failed and we were unable to recover it.
00:34:17.695 [2024-07-25 00:14:23.332958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.695 [2024-07-25 00:14:23.333096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.695 [2024-07-25 00:14:23.333129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.695 [2024-07-25 00:14:23.333143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.695 [2024-07-25 00:14:23.333156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.695 [2024-07-25 00:14:23.333186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.695 qpair failed and we were unable to recover it.
00:34:17.695 [2024-07-25 00:14:23.342995] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.695 [2024-07-25 00:14:23.343157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.695 [2024-07-25 00:14:23.343182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.695 [2024-07-25 00:14:23.343196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.695 [2024-07-25 00:14:23.343209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.695 [2024-07-25 00:14:23.343237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.695 qpair failed and we were unable to recover it.
00:34:17.695 [2024-07-25 00:14:23.353063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.695 [2024-07-25 00:14:23.353199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.695 [2024-07-25 00:14:23.353224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.695 [2024-07-25 00:14:23.353238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.695 [2024-07-25 00:14:23.353251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.695 [2024-07-25 00:14:23.353278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.695 qpair failed and we were unable to recover it.
00:34:17.695 [2024-07-25 00:14:23.363013] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.695 [2024-07-25 00:14:23.363153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.695 [2024-07-25 00:14:23.363178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.695 [2024-07-25 00:14:23.363192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.695 [2024-07-25 00:14:23.363205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.695 [2024-07-25 00:14:23.363235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.695 qpair failed and we were unable to recover it.
00:34:17.695 [2024-07-25 00:14:23.373074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.695 [2024-07-25 00:14:23.373266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.695 [2024-07-25 00:14:23.373292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.695 [2024-07-25 00:14:23.373305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.695 [2024-07-25 00:14:23.373318] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.695 [2024-07-25 00:14:23.373346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.695 qpair failed and we were unable to recover it.
00:34:17.695 [2024-07-25 00:14:23.383060] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.695 [2024-07-25 00:14:23.383199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.695 [2024-07-25 00:14:23.383224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.695 [2024-07-25 00:14:23.383238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.695 [2024-07-25 00:14:23.383251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.695 [2024-07-25 00:14:23.383278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.695 qpair failed and we were unable to recover it.
00:34:17.695 [2024-07-25 00:14:23.393133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.695 [2024-07-25 00:14:23.393301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.695 [2024-07-25 00:14:23.393326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.695 [2024-07-25 00:14:23.393340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.695 [2024-07-25 00:14:23.393353] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.695 [2024-07-25 00:14:23.393381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.695 qpair failed and we were unable to recover it.
00:34:17.695 [2024-07-25 00:14:23.403159] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.695 [2024-07-25 00:14:23.403292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.695 [2024-07-25 00:14:23.403318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.695 [2024-07-25 00:14:23.403332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.695 [2024-07-25 00:14:23.403345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.695 [2024-07-25 00:14:23.403372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.695 qpair failed and we were unable to recover it.
00:34:17.954 [2024-07-25 00:14:23.413192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.954 [2024-07-25 00:14:23.413332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.954 [2024-07-25 00:14:23.413365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.954 [2024-07-25 00:14:23.413381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.954 [2024-07-25 00:14:23.413394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.954 [2024-07-25 00:14:23.413423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.954 qpair failed and we were unable to recover it.
00:34:17.954 [2024-07-25 00:14:23.423174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.954 [2024-07-25 00:14:23.423339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.954 [2024-07-25 00:14:23.423366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.954 [2024-07-25 00:14:23.423380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.954 [2024-07-25 00:14:23.423392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.954 [2024-07-25 00:14:23.423421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.954 qpair failed and we were unable to recover it.
00:34:17.954 [2024-07-25 00:14:23.433287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.954 [2024-07-25 00:14:23.433414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.954 [2024-07-25 00:14:23.433440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.954 [2024-07-25 00:14:23.433454] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.954 [2024-07-25 00:14:23.433467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.954 [2024-07-25 00:14:23.433495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.954 qpair failed and we were unable to recover it.
00:34:17.954 [2024-07-25 00:14:23.443245] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.954 [2024-07-25 00:14:23.443381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.954 [2024-07-25 00:14:23.443407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.954 [2024-07-25 00:14:23.443426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.954 [2024-07-25 00:14:23.443439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.954 [2024-07-25 00:14:23.443468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.954 qpair failed and we were unable to recover it.
00:34:17.954 [2024-07-25 00:14:23.453267] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.954 [2024-07-25 00:14:23.453403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.954 [2024-07-25 00:14:23.453429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.954 [2024-07-25 00:14:23.453443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.954 [2024-07-25 00:14:23.453455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.954 [2024-07-25 00:14:23.453484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.954 qpair failed and we were unable to recover it.
00:34:17.954 [2024-07-25 00:14:23.463314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.954 [2024-07-25 00:14:23.463450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.954 [2024-07-25 00:14:23.463474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.954 [2024-07-25 00:14:23.463488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.954 [2024-07-25 00:14:23.463500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570
00:34:17.954 [2024-07-25 00:14:23.463527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:17.954 qpair failed and we were unable to recover it.
00:34:17.954 [2024-07-25 00:14:23.473353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.954 [2024-07-25 00:14:23.473488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.954 [2024-07-25 00:14:23.473513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.954 [2024-07-25 00:14:23.473528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.954 [2024-07-25 00:14:23.473540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.954 [2024-07-25 00:14:23.473567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.954 qpair failed and we were unable to recover it. 
00:34:17.954 [2024-07-25 00:14:23.483385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.954 [2024-07-25 00:14:23.483523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.954 [2024-07-25 00:14:23.483549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.954 [2024-07-25 00:14:23.483563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.954 [2024-07-25 00:14:23.483576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.954 [2024-07-25 00:14:23.483603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.954 qpair failed and we were unable to recover it. 
00:34:17.954 [2024-07-25 00:14:23.493410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.954 [2024-07-25 00:14:23.493545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.954 [2024-07-25 00:14:23.493570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.954 [2024-07-25 00:14:23.493585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.954 [2024-07-25 00:14:23.493598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.954 [2024-07-25 00:14:23.493626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.954 qpair failed and we were unable to recover it. 
00:34:17.954 [2024-07-25 00:14:23.503452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.954 [2024-07-25 00:14:23.503589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.954 [2024-07-25 00:14:23.503619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.954 [2024-07-25 00:14:23.503634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.954 [2024-07-25 00:14:23.503647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.954 [2024-07-25 00:14:23.503675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.954 qpair failed and we were unable to recover it. 
00:34:17.954 [2024-07-25 00:14:23.513451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.954 [2024-07-25 00:14:23.513595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.954 [2024-07-25 00:14:23.513621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.954 [2024-07-25 00:14:23.513635] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.954 [2024-07-25 00:14:23.513648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.954 [2024-07-25 00:14:23.513678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.954 qpair failed and we were unable to recover it. 
00:34:17.954 [2024-07-25 00:14:23.523488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.954 [2024-07-25 00:14:23.523637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.954 [2024-07-25 00:14:23.523663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.954 [2024-07-25 00:14:23.523677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.954 [2024-07-25 00:14:23.523690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.954 [2024-07-25 00:14:23.523718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.954 qpair failed and we were unable to recover it. 
00:34:17.954 [2024-07-25 00:14:23.533584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.954 [2024-07-25 00:14:23.533736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.954 [2024-07-25 00:14:23.533763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.954 [2024-07-25 00:14:23.533777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.954 [2024-07-25 00:14:23.533789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.954 [2024-07-25 00:14:23.533819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.954 qpair failed and we were unable to recover it. 
00:34:17.954 [2024-07-25 00:14:23.543516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.954 [2024-07-25 00:14:23.543652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.954 [2024-07-25 00:14:23.543678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.954 [2024-07-25 00:14:23.543692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.954 [2024-07-25 00:14:23.543705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.954 [2024-07-25 00:14:23.543739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.954 qpair failed and we were unable to recover it. 
00:34:17.954 [2024-07-25 00:14:23.553566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.954 [2024-07-25 00:14:23.553734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.954 [2024-07-25 00:14:23.553760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.954 [2024-07-25 00:14:23.553774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.954 [2024-07-25 00:14:23.553787] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.954 [2024-07-25 00:14:23.553815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.954 qpair failed and we were unable to recover it. 
00:34:17.954 [2024-07-25 00:14:23.563567] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.954 [2024-07-25 00:14:23.563701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.954 [2024-07-25 00:14:23.563726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.954 [2024-07-25 00:14:23.563740] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.954 [2024-07-25 00:14:23.563753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.954 [2024-07-25 00:14:23.563781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.954 qpair failed and we were unable to recover it. 
00:34:17.954 [2024-07-25 00:14:23.573612] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.954 [2024-07-25 00:14:23.573748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.954 [2024-07-25 00:14:23.573773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.954 [2024-07-25 00:14:23.573787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.955 [2024-07-25 00:14:23.573799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.955 [2024-07-25 00:14:23.573827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.955 qpair failed and we were unable to recover it. 
00:34:17.955 [2024-07-25 00:14:23.583651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.955 [2024-07-25 00:14:23.583780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.955 [2024-07-25 00:14:23.583806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.955 [2024-07-25 00:14:23.583819] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.955 [2024-07-25 00:14:23.583833] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.955 [2024-07-25 00:14:23.583861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.955 qpair failed and we were unable to recover it. 
00:34:17.955 [2024-07-25 00:14:23.593648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.955 [2024-07-25 00:14:23.593828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.955 [2024-07-25 00:14:23.593859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.955 [2024-07-25 00:14:23.593876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.955 [2024-07-25 00:14:23.593889] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.955 [2024-07-25 00:14:23.593918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.955 qpair failed and we were unable to recover it. 
00:34:17.955 [2024-07-25 00:14:23.603708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.955 [2024-07-25 00:14:23.603881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.955 [2024-07-25 00:14:23.603907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.955 [2024-07-25 00:14:23.603921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.955 [2024-07-25 00:14:23.603934] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.955 [2024-07-25 00:14:23.603963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.955 qpair failed and we were unable to recover it. 
00:34:17.955 [2024-07-25 00:14:23.613735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.955 [2024-07-25 00:14:23.613921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.955 [2024-07-25 00:14:23.613946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.955 [2024-07-25 00:14:23.613960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.955 [2024-07-25 00:14:23.613973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.955 [2024-07-25 00:14:23.614000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.955 qpair failed and we were unable to recover it. 
00:34:17.955 [2024-07-25 00:14:23.623762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.955 [2024-07-25 00:14:23.623902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.955 [2024-07-25 00:14:23.623929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.955 [2024-07-25 00:14:23.623950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.955 [2024-07-25 00:14:23.623963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.955 [2024-07-25 00:14:23.623994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.955 qpair failed and we were unable to recover it. 
00:34:17.955 [2024-07-25 00:14:23.633792] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.955 [2024-07-25 00:14:23.633960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.955 [2024-07-25 00:14:23.633987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.955 [2024-07-25 00:14:23.634001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.955 [2024-07-25 00:14:23.634013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.955 [2024-07-25 00:14:23.634047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.955 qpair failed and we were unable to recover it. 
00:34:17.955 [2024-07-25 00:14:23.643810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.955 [2024-07-25 00:14:23.643945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.955 [2024-07-25 00:14:23.643971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.955 [2024-07-25 00:14:23.643985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.955 [2024-07-25 00:14:23.643997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.955 [2024-07-25 00:14:23.644027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.955 qpair failed and we were unable to recover it. 
00:34:17.955 [2024-07-25 00:14:23.653871] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.955 [2024-07-25 00:14:23.654021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.955 [2024-07-25 00:14:23.654046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.955 [2024-07-25 00:14:23.654060] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.955 [2024-07-25 00:14:23.654073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.955 [2024-07-25 00:14:23.654100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.955 qpair failed and we were unable to recover it. 
00:34:17.955 [2024-07-25 00:14:23.663908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.955 [2024-07-25 00:14:23.664064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.955 [2024-07-25 00:14:23.664090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.955 [2024-07-25 00:14:23.664111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.955 [2024-07-25 00:14:23.664125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:17.955 [2024-07-25 00:14:23.664153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.955 qpair failed and we were unable to recover it. 
00:34:18.214 [2024-07-25 00:14:23.673896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.214 [2024-07-25 00:14:23.674028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.214 [2024-07-25 00:14:23.674057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.214 [2024-07-25 00:14:23.674071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.214 [2024-07-25 00:14:23.674084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.214 [2024-07-25 00:14:23.674120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.214 qpair failed and we were unable to recover it. 
00:34:18.214 [2024-07-25 00:14:23.683937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.214 [2024-07-25 00:14:23.684069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.214 [2024-07-25 00:14:23.684100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.214 [2024-07-25 00:14:23.684124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.214 [2024-07-25 00:14:23.684138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.214 [2024-07-25 00:14:23.684167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.214 qpair failed and we were unable to recover it. 
00:34:18.214 [2024-07-25 00:14:23.693939] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.214 [2024-07-25 00:14:23.694074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.214 [2024-07-25 00:14:23.694107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.214 [2024-07-25 00:14:23.694123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.214 [2024-07-25 00:14:23.694136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.214 [2024-07-25 00:14:23.694164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.214 qpair failed and we were unable to recover it. 
00:34:18.214 [2024-07-25 00:14:23.703965] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.214 [2024-07-25 00:14:23.704099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.214 [2024-07-25 00:14:23.704133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.214 [2024-07-25 00:14:23.704147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.214 [2024-07-25 00:14:23.704160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.214 [2024-07-25 00:14:23.704190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.214 qpair failed and we were unable to recover it. 
00:34:18.214 [2024-07-25 00:14:23.713983] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.214 [2024-07-25 00:14:23.714117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.214 [2024-07-25 00:14:23.714143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.214 [2024-07-25 00:14:23.714157] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.214 [2024-07-25 00:14:23.714170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.214 [2024-07-25 00:14:23.714198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.214 qpair failed and we were unable to recover it. 
00:34:18.214 [2024-07-25 00:14:23.724012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.214 [2024-07-25 00:14:23.724187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.214 [2024-07-25 00:14:23.724213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.214 [2024-07-25 00:14:23.724227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.214 [2024-07-25 00:14:23.724240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.214 [2024-07-25 00:14:23.724275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.214 qpair failed and we were unable to recover it. 
00:34:18.214 [2024-07-25 00:14:23.734173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.214 [2024-07-25 00:14:23.734311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.214 [2024-07-25 00:14:23.734337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.214 [2024-07-25 00:14:23.734351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.214 [2024-07-25 00:14:23.734364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.214 [2024-07-25 00:14:23.734392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.214 qpair failed and we were unable to recover it. 
00:34:18.214 [2024-07-25 00:14:23.744110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.214 [2024-07-25 00:14:23.744245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.214 [2024-07-25 00:14:23.744270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.214 [2024-07-25 00:14:23.744284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.214 [2024-07-25 00:14:23.744297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.214 [2024-07-25 00:14:23.744325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.214 qpair failed and we were unable to recover it. 
00:34:18.214 [2024-07-25 00:14:23.754093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.214 [2024-07-25 00:14:23.754228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.214 [2024-07-25 00:14:23.754253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.214 [2024-07-25 00:14:23.754267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.214 [2024-07-25 00:14:23.754280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.214 [2024-07-25 00:14:23.754307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.214 qpair failed and we were unable to recover it. 
00:34:18.214 [2024-07-25 00:14:23.764114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.214 [2024-07-25 00:14:23.764257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.214 [2024-07-25 00:14:23.764283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.214 [2024-07-25 00:14:23.764297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.214 [2024-07-25 00:14:23.764309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.214 [2024-07-25 00:14:23.764336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.214 qpair failed and we were unable to recover it. 
00:34:18.214 [2024-07-25 00:14:23.774167] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.214 [2024-07-25 00:14:23.774305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.214 [2024-07-25 00:14:23.774335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.215 [2024-07-25 00:14:23.774350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.215 [2024-07-25 00:14:23.774363] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.215 [2024-07-25 00:14:23.774391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.215 qpair failed and we were unable to recover it. 
00:34:18.215 [2024-07-25 00:14:23.784199] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.215 [2024-07-25 00:14:23.784332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.215 [2024-07-25 00:14:23.784358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.215 [2024-07-25 00:14:23.784372] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.215 [2024-07-25 00:14:23.784385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.215 [2024-07-25 00:14:23.784413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.215 qpair failed and we were unable to recover it. 
00:34:18.215 [2024-07-25 00:14:23.794216] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.215 [2024-07-25 00:14:23.794347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.215 [2024-07-25 00:14:23.794373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.215 [2024-07-25 00:14:23.794386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.215 [2024-07-25 00:14:23.794399] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.215 [2024-07-25 00:14:23.794427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.215 qpair failed and we were unable to recover it. 
00:34:18.215 [2024-07-25 00:14:23.804259] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.215 [2024-07-25 00:14:23.804397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.215 [2024-07-25 00:14:23.804424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.215 [2024-07-25 00:14:23.804438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.215 [2024-07-25 00:14:23.804451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.215 [2024-07-25 00:14:23.804478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.215 qpair failed and we were unable to recover it. 
00:34:18.215 [2024-07-25 00:14:23.814290] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.215 [2024-07-25 00:14:23.814428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.215 [2024-07-25 00:14:23.814454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.215 [2024-07-25 00:14:23.814467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.215 [2024-07-25 00:14:23.814488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.215 [2024-07-25 00:14:23.814516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.215 qpair failed and we were unable to recover it. 
00:34:18.215 [2024-07-25 00:14:23.824305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.215 [2024-07-25 00:14:23.824433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.215 [2024-07-25 00:14:23.824458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.215 [2024-07-25 00:14:23.824472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.215 [2024-07-25 00:14:23.824485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.215 [2024-07-25 00:14:23.824512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.215 qpair failed and we were unable to recover it. 
00:34:18.215 [2024-07-25 00:14:23.834331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.215 [2024-07-25 00:14:23.834463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.215 [2024-07-25 00:14:23.834489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.215 [2024-07-25 00:14:23.834509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.215 [2024-07-25 00:14:23.834522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.215 [2024-07-25 00:14:23.834551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.215 qpair failed and we were unable to recover it. 
00:34:18.215 [2024-07-25 00:14:23.844379] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.215 [2024-07-25 00:14:23.844507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.215 [2024-07-25 00:14:23.844533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.215 [2024-07-25 00:14:23.844547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.215 [2024-07-25 00:14:23.844560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.215 [2024-07-25 00:14:23.844587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.215 qpair failed and we were unable to recover it. 
00:34:18.215 [2024-07-25 00:14:23.854396] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.215 [2024-07-25 00:14:23.854535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.215 [2024-07-25 00:14:23.854561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.215 [2024-07-25 00:14:23.854574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.215 [2024-07-25 00:14:23.854587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.215 [2024-07-25 00:14:23.854615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.215 qpair failed and we were unable to recover it. 
00:34:18.215 [2024-07-25 00:14:23.864444] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.215 [2024-07-25 00:14:23.864618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.215 [2024-07-25 00:14:23.864643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.215 [2024-07-25 00:14:23.864657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.215 [2024-07-25 00:14:23.864670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.215 [2024-07-25 00:14:23.864698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.215 qpair failed and we were unable to recover it. 
00:34:18.215 [2024-07-25 00:14:23.874461] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.215 [2024-07-25 00:14:23.874643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.215 [2024-07-25 00:14:23.874669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.215 [2024-07-25 00:14:23.874682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.215 [2024-07-25 00:14:23.874695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.215 [2024-07-25 00:14:23.874722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.215 qpair failed and we were unable to recover it. 
00:34:18.215 [2024-07-25 00:14:23.884475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.215 [2024-07-25 00:14:23.884606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.215 [2024-07-25 00:14:23.884631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.215 [2024-07-25 00:14:23.884645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.215 [2024-07-25 00:14:23.884657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.215 [2024-07-25 00:14:23.884685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.215 qpair failed and we were unable to recover it. 
00:34:18.215 [2024-07-25 00:14:23.894518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.215 [2024-07-25 00:14:23.894667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.215 [2024-07-25 00:14:23.894692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.215 [2024-07-25 00:14:23.894706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.215 [2024-07-25 00:14:23.894718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.215 [2024-07-25 00:14:23.894745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.215 qpair failed and we were unable to recover it. 
00:34:18.215 [2024-07-25 00:14:23.904524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.215 [2024-07-25 00:14:23.904656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.216 [2024-07-25 00:14:23.904680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.216 [2024-07-25 00:14:23.904694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.216 [2024-07-25 00:14:23.904712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.216 [2024-07-25 00:14:23.904740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.216 qpair failed and we were unable to recover it. 
00:34:18.216 [2024-07-25 00:14:23.914561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.216 [2024-07-25 00:14:23.914697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.216 [2024-07-25 00:14:23.914722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.216 [2024-07-25 00:14:23.914736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.216 [2024-07-25 00:14:23.914748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.216 [2024-07-25 00:14:23.914776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.216 qpair failed and we were unable to recover it. 
00:34:18.216 [2024-07-25 00:14:23.924587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.216 [2024-07-25 00:14:23.924713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.216 [2024-07-25 00:14:23.924738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.216 [2024-07-25 00:14:23.924752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.216 [2024-07-25 00:14:23.924765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.216 [2024-07-25 00:14:23.924792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.216 qpair failed and we were unable to recover it. 
00:34:18.475 [2024-07-25 00:14:23.934659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.475 [2024-07-25 00:14:23.934800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.475 [2024-07-25 00:14:23.934828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.475 [2024-07-25 00:14:23.934843] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.475 [2024-07-25 00:14:23.934855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.475 [2024-07-25 00:14:23.934884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.475 qpair failed and we were unable to recover it. 
00:34:18.475 [2024-07-25 00:14:23.944664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.475 [2024-07-25 00:14:23.944802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.475 [2024-07-25 00:14:23.944828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.475 [2024-07-25 00:14:23.944842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.475 [2024-07-25 00:14:23.944855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.475 [2024-07-25 00:14:23.944883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.475 qpair failed and we were unable to recover it. 
00:34:18.475 [2024-07-25 00:14:23.954675] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.475 [2024-07-25 00:14:23.954810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.475 [2024-07-25 00:14:23.954836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.475 [2024-07-25 00:14:23.954849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.475 [2024-07-25 00:14:23.954862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.475 [2024-07-25 00:14:23.954889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.475 qpair failed and we were unable to recover it. 
00:34:18.475 [2024-07-25 00:14:23.964722] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.475 [2024-07-25 00:14:23.964855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.475 [2024-07-25 00:14:23.964880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.475 [2024-07-25 00:14:23.964894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.475 [2024-07-25 00:14:23.964906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.475 [2024-07-25 00:14:23.964934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.475 qpair failed and we were unable to recover it. 
00:34:18.475 [2024-07-25 00:14:23.974751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.475 [2024-07-25 00:14:23.974885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.475 [2024-07-25 00:14:23.974911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.475 [2024-07-25 00:14:23.974924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.475 [2024-07-25 00:14:23.974937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.475 [2024-07-25 00:14:23.974964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.475 qpair failed and we were unable to recover it. 
00:34:18.475 [2024-07-25 00:14:23.984868] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.475 [2024-07-25 00:14:23.985003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.475 [2024-07-25 00:14:23.985028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.475 [2024-07-25 00:14:23.985043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.475 [2024-07-25 00:14:23.985055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.475 [2024-07-25 00:14:23.985082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.475 qpair failed and we were unable to recover it. 
00:34:18.475 [2024-07-25 00:14:23.994825] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.475 [2024-07-25 00:14:23.994951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.475 [2024-07-25 00:14:23.994976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.475 [2024-07-25 00:14:23.994990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.475 [2024-07-25 00:14:23.995008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.475 [2024-07-25 00:14:23.995036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.475 qpair failed and we were unable to recover it. 
00:34:18.475 [2024-07-25 00:14:24.004811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.475 [2024-07-25 00:14:24.004941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.475 [2024-07-25 00:14:24.004966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.475 [2024-07-25 00:14:24.004980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.475 [2024-07-25 00:14:24.004993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.475 [2024-07-25 00:14:24.005020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.475 qpair failed and we were unable to recover it. 
00:34:18.475 [2024-07-25 00:14:24.014872] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.475 [2024-07-25 00:14:24.015030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.475 [2024-07-25 00:14:24.015057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.475 [2024-07-25 00:14:24.015071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.475 [2024-07-25 00:14:24.015084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.475 [2024-07-25 00:14:24.015120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.475 qpair failed and we were unable to recover it. 
00:34:18.476 [2024-07-25 00:14:24.024892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.476 [2024-07-25 00:14:24.025033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.476 [2024-07-25 00:14:24.025059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.476 [2024-07-25 00:14:24.025072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.476 [2024-07-25 00:14:24.025085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.476 [2024-07-25 00:14:24.025119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.476 qpair failed and we were unable to recover it. 
00:34:18.476 [2024-07-25 00:14:24.034916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.476 [2024-07-25 00:14:24.035060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.476 [2024-07-25 00:14:24.035086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.476 [2024-07-25 00:14:24.035100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.476 [2024-07-25 00:14:24.035121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.476 [2024-07-25 00:14:24.035150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.476 qpair failed and we were unable to recover it. 
00:34:18.476 [2024-07-25 00:14:24.044921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.476 [2024-07-25 00:14:24.045058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.476 [2024-07-25 00:14:24.045084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.476 [2024-07-25 00:14:24.045098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.476 [2024-07-25 00:14:24.045118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.476 [2024-07-25 00:14:24.045149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.476 qpair failed and we were unable to recover it. 
00:34:18.476 [2024-07-25 00:14:24.054970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.476 [2024-07-25 00:14:24.055125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.476 [2024-07-25 00:14:24.055151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.476 [2024-07-25 00:14:24.055165] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.476 [2024-07-25 00:14:24.055178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.476 [2024-07-25 00:14:24.055206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.476 qpair failed and we were unable to recover it. 
00:34:18.476 [2024-07-25 00:14:24.065009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.476 [2024-07-25 00:14:24.065151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.476 [2024-07-25 00:14:24.065177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.476 [2024-07-25 00:14:24.065190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.476 [2024-07-25 00:14:24.065203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.476 [2024-07-25 00:14:24.065231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.476 qpair failed and we were unable to recover it. 
00:34:18.476 [2024-07-25 00:14:24.075059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.476 [2024-07-25 00:14:24.075218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.476 [2024-07-25 00:14:24.075248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.476 [2024-07-25 00:14:24.075263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.476 [2024-07-25 00:14:24.075275] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.476 [2024-07-25 00:14:24.075304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.476 qpair failed and we were unable to recover it. 
00:34:18.476 [2024-07-25 00:14:24.085054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.476 [2024-07-25 00:14:24.085202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.476 [2024-07-25 00:14:24.085229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.476 [2024-07-25 00:14:24.085249] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.476 [2024-07-25 00:14:24.085263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.476 [2024-07-25 00:14:24.085291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.476 qpair failed and we were unable to recover it. 
00:34:18.476 [2024-07-25 00:14:24.095117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.476 [2024-07-25 00:14:24.095259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.476 [2024-07-25 00:14:24.095284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.476 [2024-07-25 00:14:24.095298] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.476 [2024-07-25 00:14:24.095311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.476 [2024-07-25 00:14:24.095339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.476 qpair failed and we were unable to recover it. 
00:34:18.476 [2024-07-25 00:14:24.105156] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.476 [2024-07-25 00:14:24.105287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.476 [2024-07-25 00:14:24.105313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.476 [2024-07-25 00:14:24.105327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.476 [2024-07-25 00:14:24.105339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.476 [2024-07-25 00:14:24.105366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.476 qpair failed and we were unable to recover it. 
00:34:18.476 [2024-07-25 00:14:24.115143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.476 [2024-07-25 00:14:24.115280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.476 [2024-07-25 00:14:24.115305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.476 [2024-07-25 00:14:24.115319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.476 [2024-07-25 00:14:24.115331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.476 [2024-07-25 00:14:24.115359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.476 qpair failed and we were unable to recover it. 
00:34:18.476 [2024-07-25 00:14:24.125184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.476 [2024-07-25 00:14:24.125311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.476 [2024-07-25 00:14:24.125337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.476 [2024-07-25 00:14:24.125351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.476 [2024-07-25 00:14:24.125364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.476 [2024-07-25 00:14:24.125391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.476 qpair failed and we were unable to recover it. 
00:34:18.476 [2024-07-25 00:14:24.135325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.476 [2024-07-25 00:14:24.135469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.476 [2024-07-25 00:14:24.135494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.476 [2024-07-25 00:14:24.135508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.476 [2024-07-25 00:14:24.135521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.476 [2024-07-25 00:14:24.135551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.476 qpair failed and we were unable to recover it. 
00:34:18.476 [2024-07-25 00:14:24.145255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.476 [2024-07-25 00:14:24.145385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.476 [2024-07-25 00:14:24.145410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.476 [2024-07-25 00:14:24.145424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.476 [2024-07-25 00:14:24.145437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.477 [2024-07-25 00:14:24.145465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.477 qpair failed and we were unable to recover it. 
00:34:18.477 [2024-07-25 00:14:24.155321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.477 [2024-07-25 00:14:24.155480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.477 [2024-07-25 00:14:24.155505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.477 [2024-07-25 00:14:24.155518] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.477 [2024-07-25 00:14:24.155532] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.477 [2024-07-25 00:14:24.155559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.477 qpair failed and we were unable to recover it. 
00:34:18.477 [2024-07-25 00:14:24.165286] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.477 [2024-07-25 00:14:24.165431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.477 [2024-07-25 00:14:24.165456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.477 [2024-07-25 00:14:24.165471] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.477 [2024-07-25 00:14:24.165484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.477 [2024-07-25 00:14:24.165510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.477 qpair failed and we were unable to recover it. 
00:34:18.477 [2024-07-25 00:14:24.175330] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.477 [2024-07-25 00:14:24.175467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.477 [2024-07-25 00:14:24.175493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.477 [2024-07-25 00:14:24.175516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.477 [2024-07-25 00:14:24.175530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.477 [2024-07-25 00:14:24.175560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.477 qpair failed and we were unable to recover it. 
00:34:18.477 [2024-07-25 00:14:24.185350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.477 [2024-07-25 00:14:24.185499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.477 [2024-07-25 00:14:24.185524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.477 [2024-07-25 00:14:24.185538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.477 [2024-07-25 00:14:24.185551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.477 [2024-07-25 00:14:24.185579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.477 qpair failed and we were unable to recover it. 
00:34:18.736 [2024-07-25 00:14:24.195392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.736 [2024-07-25 00:14:24.195538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.736 [2024-07-25 00:14:24.195575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.736 [2024-07-25 00:14:24.195603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.736 [2024-07-25 00:14:24.195628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.736 [2024-07-25 00:14:24.195673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.736 qpair failed and we were unable to recover it. 
00:34:18.736 [2024-07-25 00:14:24.205455] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.736 [2024-07-25 00:14:24.205586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.736 [2024-07-25 00:14:24.205613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.736 [2024-07-25 00:14:24.205627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.736 [2024-07-25 00:14:24.205640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.736 [2024-07-25 00:14:24.205670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.736 qpair failed and we were unable to recover it. 
00:34:18.736 [2024-07-25 00:14:24.215450] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.736 [2024-07-25 00:14:24.215635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.736 [2024-07-25 00:14:24.215662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.736 [2024-07-25 00:14:24.215675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.736 [2024-07-25 00:14:24.215688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.736 [2024-07-25 00:14:24.215715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.736 qpair failed and we were unable to recover it. 
00:34:18.736 [2024-07-25 00:14:24.225453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.736 [2024-07-25 00:14:24.225586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.736 [2024-07-25 00:14:24.225613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.736 [2024-07-25 00:14:24.225626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.736 [2024-07-25 00:14:24.225639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.736 [2024-07-25 00:14:24.225666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.736 qpair failed and we were unable to recover it. 
00:34:18.736 [2024-07-25 00:14:24.235524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.736 [2024-07-25 00:14:24.235662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.736 [2024-07-25 00:14:24.235687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.736 [2024-07-25 00:14:24.235701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.736 [2024-07-25 00:14:24.235714] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.736 [2024-07-25 00:14:24.235741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.736 qpair failed and we were unable to recover it. 
00:34:18.736 [2024-07-25 00:14:24.245512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.736 [2024-07-25 00:14:24.245646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.736 [2024-07-25 00:14:24.245672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.736 [2024-07-25 00:14:24.245687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.736 [2024-07-25 00:14:24.245699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.736 [2024-07-25 00:14:24.245727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.736 qpair failed and we were unable to recover it. 
00:34:18.736 [2024-07-25 00:14:24.255564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.736 [2024-07-25 00:14:24.255746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.736 [2024-07-25 00:14:24.255771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.736 [2024-07-25 00:14:24.255785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.736 [2024-07-25 00:14:24.255797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.736 [2024-07-25 00:14:24.255825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.736 qpair failed and we were unable to recover it. 
00:34:18.737 [2024-07-25 00:14:24.265583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.737 [2024-07-25 00:14:24.265712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.737 [2024-07-25 00:14:24.265737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.737 [2024-07-25 00:14:24.265758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.737 [2024-07-25 00:14:24.265771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.737 [2024-07-25 00:14:24.265801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.737 qpair failed and we were unable to recover it. 
00:34:18.737 [2024-07-25 00:14:24.275608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.737 [2024-07-25 00:14:24.275748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.737 [2024-07-25 00:14:24.275774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.737 [2024-07-25 00:14:24.275789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.737 [2024-07-25 00:14:24.275803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.737 [2024-07-25 00:14:24.275830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.737 qpair failed and we were unable to recover it. 
00:34:18.737 [2024-07-25 00:14:24.285672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.737 [2024-07-25 00:14:24.285842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.737 [2024-07-25 00:14:24.285867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.737 [2024-07-25 00:14:24.285881] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.737 [2024-07-25 00:14:24.285894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.737 [2024-07-25 00:14:24.285922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.737 qpair failed and we were unable to recover it. 
00:34:18.737 [2024-07-25 00:14:24.295708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.737 [2024-07-25 00:14:24.295839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.737 [2024-07-25 00:14:24.295865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.737 [2024-07-25 00:14:24.295879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.737 [2024-07-25 00:14:24.295893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.737 [2024-07-25 00:14:24.295921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.737 qpair failed and we were unable to recover it. 
00:34:18.737 [2024-07-25 00:14:24.305727] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.737 [2024-07-25 00:14:24.305855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.737 [2024-07-25 00:14:24.305880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.737 [2024-07-25 00:14:24.305894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.737 [2024-07-25 00:14:24.305908] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.737 [2024-07-25 00:14:24.305935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.737 qpair failed and we were unable to recover it. 
00:34:18.737 [2024-07-25 00:14:24.315773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.737 [2024-07-25 00:14:24.315904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.737 [2024-07-25 00:14:24.315932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.737 [2024-07-25 00:14:24.315949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.737 [2024-07-25 00:14:24.315962] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.737 [2024-07-25 00:14:24.315990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.737 qpair failed and we were unable to recover it. 
00:34:18.737 [2024-07-25 00:14:24.325834] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.737 [2024-07-25 00:14:24.325982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.737 [2024-07-25 00:14:24.326008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.737 [2024-07-25 00:14:24.326022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.737 [2024-07-25 00:14:24.326035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.737 [2024-07-25 00:14:24.326063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.737 qpair failed and we were unable to recover it. 
00:34:18.737 [2024-07-25 00:14:24.335850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.737 [2024-07-25 00:14:24.335987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.737 [2024-07-25 00:14:24.336013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.737 [2024-07-25 00:14:24.336027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.737 [2024-07-25 00:14:24.336040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.737 [2024-07-25 00:14:24.336067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.737 qpair failed and we were unable to recover it. 
00:34:18.737 [2024-07-25 00:14:24.345864] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.737 [2024-07-25 00:14:24.346009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.737 [2024-07-25 00:14:24.346034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.737 [2024-07-25 00:14:24.346048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.737 [2024-07-25 00:14:24.346061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.737 [2024-07-25 00:14:24.346088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.737 qpair failed and we were unable to recover it. 
00:34:18.737 [2024-07-25 00:14:24.356015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.737 [2024-07-25 00:14:24.356156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.737 [2024-07-25 00:14:24.356182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.737 [2024-07-25 00:14:24.356202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.737 [2024-07-25 00:14:24.356216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.737 [2024-07-25 00:14:24.356245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.737 qpair failed and we were unable to recover it. 
00:34:18.737 [2024-07-25 00:14:24.365900] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.737 [2024-07-25 00:14:24.366026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.737 [2024-07-25 00:14:24.366052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.737 [2024-07-25 00:14:24.366066] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.737 [2024-07-25 00:14:24.366079] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.737 [2024-07-25 00:14:24.366116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.737 qpair failed and we were unable to recover it. 
00:34:18.737 [2024-07-25 00:14:24.375941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.737 [2024-07-25 00:14:24.376080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.737 [2024-07-25 00:14:24.376112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.737 [2024-07-25 00:14:24.376128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.737 [2024-07-25 00:14:24.376142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.737 [2024-07-25 00:14:24.376170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.737 qpair failed and we were unable to recover it. 
00:34:18.738 [2024-07-25 00:14:24.385975] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.738 [2024-07-25 00:14:24.386121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.738 [2024-07-25 00:14:24.386147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.738 [2024-07-25 00:14:24.386161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.738 [2024-07-25 00:14:24.386175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.738 [2024-07-25 00:14:24.386202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.738 qpair failed and we were unable to recover it. 
00:34:18.738 [2024-07-25 00:14:24.395988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.738 [2024-07-25 00:14:24.396139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.738 [2024-07-25 00:14:24.396165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.738 [2024-07-25 00:14:24.396178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.738 [2024-07-25 00:14:24.396191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.738 [2024-07-25 00:14:24.396220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.738 qpair failed and we were unable to recover it. 
00:34:18.738 [2024-07-25 00:14:24.406022] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.738 [2024-07-25 00:14:24.406168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.738 [2024-07-25 00:14:24.406194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.738 [2024-07-25 00:14:24.406208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.738 [2024-07-25 00:14:24.406221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.738 [2024-07-25 00:14:24.406250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.738 qpair failed and we were unable to recover it. 
00:34:18.738 [2024-07-25 00:14:24.416057] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.738 [2024-07-25 00:14:24.416202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.738 [2024-07-25 00:14:24.416228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.738 [2024-07-25 00:14:24.416242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.738 [2024-07-25 00:14:24.416255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.738 [2024-07-25 00:14:24.416283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.738 qpair failed and we were unable to recover it. 
00:34:18.738 [2024-07-25 00:14:24.426089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.738 [2024-07-25 00:14:24.426254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.738 [2024-07-25 00:14:24.426279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.738 [2024-07-25 00:14:24.426292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.738 [2024-07-25 00:14:24.426305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.738 [2024-07-25 00:14:24.426333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.738 qpair failed and we were unable to recover it. 
00:34:18.738 [2024-07-25 00:14:24.436118] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.738 [2024-07-25 00:14:24.436266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.738 [2024-07-25 00:14:24.436292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.738 [2024-07-25 00:14:24.436306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.738 [2024-07-25 00:14:24.436319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.738 [2024-07-25 00:14:24.436347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.738 qpair failed and we were unable to recover it. 
00:34:18.738 [2024-07-25 00:14:24.446149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.738 [2024-07-25 00:14:24.446281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.738 [2024-07-25 00:14:24.446312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.738 [2024-07-25 00:14:24.446327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.738 [2024-07-25 00:14:24.446339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.738 [2024-07-25 00:14:24.446367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.738 qpair failed and we were unable to recover it. 
00:34:18.997 [2024-07-25 00:14:24.456170] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.997 [2024-07-25 00:14:24.456310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.997 [2024-07-25 00:14:24.456339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.997 [2024-07-25 00:14:24.456353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.997 [2024-07-25 00:14:24.456366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.997 [2024-07-25 00:14:24.456395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.997 qpair failed and we were unable to recover it. 
00:34:18.997 [2024-07-25 00:14:24.466301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.997 [2024-07-25 00:14:24.466444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.997 [2024-07-25 00:14:24.466471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.997 [2024-07-25 00:14:24.466484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.997 [2024-07-25 00:14:24.466496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.997 [2024-07-25 00:14:24.466524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.997 qpair failed and we were unable to recover it. 
00:34:18.997 [2024-07-25 00:14:24.476241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.997 [2024-07-25 00:14:24.476373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.997 [2024-07-25 00:14:24.476400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.997 [2024-07-25 00:14:24.476414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.997 [2024-07-25 00:14:24.476427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.997 [2024-07-25 00:14:24.476455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.997 qpair failed and we were unable to recover it. 
00:34:18.997 [2024-07-25 00:14:24.486290] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.997 [2024-07-25 00:14:24.486462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.997 [2024-07-25 00:14:24.486488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.997 [2024-07-25 00:14:24.486502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.997 [2024-07-25 00:14:24.486514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.997 [2024-07-25 00:14:24.486547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.997 qpair failed and we were unable to recover it. 
00:34:18.997 [2024-07-25 00:14:24.496278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.997 [2024-07-25 00:14:24.496429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.997 [2024-07-25 00:14:24.496455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.997 [2024-07-25 00:14:24.496468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.997 [2024-07-25 00:14:24.496481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.997 [2024-07-25 00:14:24.496508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.997 qpair failed and we were unable to recover it. 
00:34:18.997 [2024-07-25 00:14:24.506332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.997 [2024-07-25 00:14:24.506467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.997 [2024-07-25 00:14:24.506496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.997 [2024-07-25 00:14:24.506511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.997 [2024-07-25 00:14:24.506524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.997 [2024-07-25 00:14:24.506553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.997 qpair failed and we were unable to recover it. 
00:34:18.997 [2024-07-25 00:14:24.516349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.997 [2024-07-25 00:14:24.516525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.997 [2024-07-25 00:14:24.516551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.997 [2024-07-25 00:14:24.516565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.997 [2024-07-25 00:14:24.516578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.997 [2024-07-25 00:14:24.516606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.997 qpair failed and we were unable to recover it. 
00:34:18.997 [2024-07-25 00:14:24.526388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.997 [2024-07-25 00:14:24.526567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.997 [2024-07-25 00:14:24.526593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.997 [2024-07-25 00:14:24.526607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.997 [2024-07-25 00:14:24.526619] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.997 [2024-07-25 00:14:24.526648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.997 qpair failed and we were unable to recover it. 
00:34:18.997 [2024-07-25 00:14:24.536442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.997 [2024-07-25 00:14:24.536595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.997 [2024-07-25 00:14:24.536626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.997 [2024-07-25 00:14:24.536641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.997 [2024-07-25 00:14:24.536653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.997 [2024-07-25 00:14:24.536681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.997 qpair failed and we were unable to recover it. 
00:34:18.997 [2024-07-25 00:14:24.546424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.997 [2024-07-25 00:14:24.546554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.997 [2024-07-25 00:14:24.546579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.997 [2024-07-25 00:14:24.546593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.997 [2024-07-25 00:14:24.546605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.997 [2024-07-25 00:14:24.546633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.997 qpair failed and we were unable to recover it. 
00:34:18.997 [2024-07-25 00:14:24.556455] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.997 [2024-07-25 00:14:24.556627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.997 [2024-07-25 00:14:24.556653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.997 [2024-07-25 00:14:24.556667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.998 [2024-07-25 00:14:24.556680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.998 [2024-07-25 00:14:24.556707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.998 qpair failed and we were unable to recover it. 
00:34:18.998 [2024-07-25 00:14:24.566449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.998 [2024-07-25 00:14:24.566578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.998 [2024-07-25 00:14:24.566603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.998 [2024-07-25 00:14:24.566617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.998 [2024-07-25 00:14:24.566630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.998 [2024-07-25 00:14:24.566657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.998 qpair failed and we were unable to recover it. 
00:34:18.998 [2024-07-25 00:14:24.576530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.998 [2024-07-25 00:14:24.576667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.998 [2024-07-25 00:14:24.576693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.998 [2024-07-25 00:14:24.576707] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.998 [2024-07-25 00:14:24.576719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.998 [2024-07-25 00:14:24.576752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.998 qpair failed and we were unable to recover it. 
00:34:18.998 [2024-07-25 00:14:24.586516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.998 [2024-07-25 00:14:24.586682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.998 [2024-07-25 00:14:24.586707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.998 [2024-07-25 00:14:24.586721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.998 [2024-07-25 00:14:24.586733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.998 [2024-07-25 00:14:24.586761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.998 qpair failed and we were unable to recover it. 
00:34:18.998 [2024-07-25 00:14:24.596576] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.998 [2024-07-25 00:14:24.596714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.998 [2024-07-25 00:14:24.596739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.998 [2024-07-25 00:14:24.596753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.998 [2024-07-25 00:14:24.596765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.998 [2024-07-25 00:14:24.596793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.998 qpair failed and we were unable to recover it. 
00:34:18.998 [2024-07-25 00:14:24.606641] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.998 [2024-07-25 00:14:24.606777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.998 [2024-07-25 00:14:24.606802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.998 [2024-07-25 00:14:24.606816] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.998 [2024-07-25 00:14:24.606829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.998 [2024-07-25 00:14:24.606856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.998 qpair failed and we were unable to recover it. 
00:34:18.998 [2024-07-25 00:14:24.616610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.998 [2024-07-25 00:14:24.616743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.998 [2024-07-25 00:14:24.616768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.998 [2024-07-25 00:14:24.616781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.998 [2024-07-25 00:14:24.616794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.998 [2024-07-25 00:14:24.616822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.998 qpair failed and we were unable to recover it. 
00:34:18.998 [2024-07-25 00:14:24.626653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.998 [2024-07-25 00:14:24.626797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.998 [2024-07-25 00:14:24.626830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.998 [2024-07-25 00:14:24.626851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.998 [2024-07-25 00:14:24.626864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.998 [2024-07-25 00:14:24.626892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.998 qpair failed and we were unable to recover it. 
00:34:18.998 [2024-07-25 00:14:24.636682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.998 [2024-07-25 00:14:24.636821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.998 [2024-07-25 00:14:24.636847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.998 [2024-07-25 00:14:24.636861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.998 [2024-07-25 00:14:24.636875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4570 00:34:18.998 [2024-07-25 00:14:24.636902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.998 qpair failed and we were unable to recover it. 
00:34:18.998 [2024-07-25 00:14:24.646725] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.998 [2024-07-25 00:14:24.646862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.998 [2024-07-25 00:14:24.646896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.998 [2024-07-25 00:14:24.646912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.998 [2024-07-25 00:14:24.646924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fda78000b90 00:34:18.998 [2024-07-25 00:14:24.646959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:18.998 qpair failed and we were unable to recover it. 
00:34:18.998 [2024-07-25 00:14:24.656763] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.998 [2024-07-25 00:14:24.656900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.998 [2024-07-25 00:14:24.656928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.998 [2024-07-25 00:14:24.656942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.998 [2024-07-25 00:14:24.656955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fda78000b90 00:34:18.998 [2024-07-25 00:14:24.656986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:18.998 qpair failed and we were unable to recover it. 
00:34:18.998 [2024-07-25 00:14:24.666778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.998 [2024-07-25 00:14:24.666931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.998 [2024-07-25 00:14:24.666963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.998 [2024-07-25 00:14:24.666979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.998 [2024-07-25 00:14:24.666992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fda70000b90 00:34:18.998 [2024-07-25 00:14:24.667029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.998 qpair failed and we were unable to recover it. 
00:34:18.998 [2024-07-25 00:14:24.676791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.998 [2024-07-25 00:14:24.676918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.998 [2024-07-25 00:14:24.676946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.998 [2024-07-25 00:14:24.676960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.998 [2024-07-25 00:14:24.676973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fda70000b90 00:34:18.998 [2024-07-25 00:14:24.677004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 00:14:24.677108] nvme_ctrlr.c:4353:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:18.998 A controller has encountered a failure and is being reset. 
00:34:18.999 [2024-07-25 00:14:24.686920] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.999 [2024-07-25 00:14:24.687057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.999 [2024-07-25 00:14:24.687089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.999 [2024-07-25 00:14:24.687115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.999 [2024-07-25 00:14:24.687133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fda80000b90 00:34:18.999 [2024-07-25 00:14:24.687164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.999 qpair failed and we were unable to recover it. 
00:34:18.999 [2024-07-25 00:14:24.696859] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.999 [2024-07-25 00:14:24.696994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.999 [2024-07-25 00:14:24.697022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.999 [2024-07-25 00:14:24.697036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.999 [2024-07-25 00:14:24.697049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fda80000b90 00:34:18.999 [2024-07-25 00:14:24.697081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.999 qpair failed and we were unable to recover it. 00:34:19.257 Controller properly reset. 00:34:19.257 Initializing NVMe Controllers 00:34:19.257 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:19.257 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:19.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:19.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:19.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:19.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:19.257 Initialization complete. Launching workers. 
00:34:19.257 Starting thread on core 1 00:34:19.257 Starting thread on core 2 00:34:19.257 Starting thread on core 3 00:34:19.257 Starting thread on core 0 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:19.257 00:34:19.257 real 0m10.812s 00:34:19.257 user 0m18.102s 00:34:19.257 sys 0m5.353s 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.257 ************************************ 00:34:19.257 END TEST nvmf_target_disconnect_tc2 00:34:19.257 ************************************ 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:19.257 rmmod nvme_tcp 00:34:19.257 rmmod nvme_fabrics 00:34:19.257 rmmod nvme_keyring 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:19.257 00:14:24 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 944393 ']' 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 944393 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 944393 ']' 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 944393 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 944393 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 944393' 00:34:19.257 killing process with pid 944393 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 944393 00:34:19.257 00:14:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 944393 00:34:19.515 00:14:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:19.515 00:14:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:19.515 00:14:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:19.515 00:14:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:19.515 00:14:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:19.515 00:14:25 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.515 00:14:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:19.515 00:14:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.047 00:14:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:22.047 00:34:22.047 real 0m15.535s 00:34:22.047 user 0m44.433s 00:34:22.047 sys 0m7.266s 00:34:22.047 00:14:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:22.047 00:14:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:22.047 ************************************ 00:34:22.047 END TEST nvmf_target_disconnect 00:34:22.047 ************************************ 00:34:22.047 00:14:27 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:22.047 00:14:27 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.047 00:14:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.047 00:14:27 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:22.047 00:34:22.047 real 27m0.168s 00:34:22.047 user 73m51.792s 00:34:22.047 sys 6m29.896s 00:34:22.047 00:14:27 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:22.047 00:14:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.047 ************************************ 00:34:22.047 END TEST nvmf_tcp 00:34:22.047 ************************************ 00:34:22.047 00:14:27 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:22.047 00:14:27 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:22.047 00:14:27 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:22.047 00:14:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:22.047 00:14:27 -- 
common/autotest_common.sh@10 -- # set +x 00:34:22.047 ************************************ 00:34:22.047 START TEST spdkcli_nvmf_tcp 00:34:22.047 ************************************ 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:22.047 * Looking for test storage... 00:34:22.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:22.047 00:14:27 
spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=945591 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 945591 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 945591 ']' 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.047 [2024-07-25 00:14:27.372875] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:34:22.047 [2024-07-25 00:14:27.372981] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid945591 ] 00:34:22.047 EAL: No free 2048 kB hugepages reported on node 1 00:34:22.047 [2024-07-25 00:14:27.431604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:22.047 [2024-07-25 00:14:27.522122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:22.047 [2024-07-25 00:14:27.522131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.047 00:14:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 
''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:22.047 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:22.047 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:22.047 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:22.047 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:22.047 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:22.047 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:22.047 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:22.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:22.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:22.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:22.048 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:22.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:22.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:22.048 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:22.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:22.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:22.048 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:22.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:22.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:22.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:22.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:22.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:22.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:22.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:22.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:22.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:22.048 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:22.048 ' 00:34:24.576 [2024-07-25 00:14:30.174213] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:25.948 [2024-07-25 00:14:31.394369] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:28.470 [2024-07-25 00:14:33.649505] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:30.363 [2024-07-25 00:14:35.579472] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:31.733 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:31.733 
Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:31.733 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:31.733 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:31.733 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:31.733 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:31.733 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:31.733 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:31.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:31.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:31.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:31.733 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:31.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:31.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:31.733 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:31.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:31.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', 
'127.0.0.1:4260', True] 00:34:31.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:31.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:31.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:31.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:31.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:31.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:31.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:31.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:31.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:31.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:31.733 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:31.733 00:14:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:31.733 00:14:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:31.733 00:14:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:31.733 00:14:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:31.733 00:14:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:34:31.733 00:14:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:31.733 00:14:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:31.733 00:14:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:31.990 00:14:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:31.990 00:14:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:31.990 00:14:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:31.990 00:14:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:31.990 00:14:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.248 00:14:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:32.248 00:14:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:32.248 00:14:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.248 00:14:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:32.248 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:32.248 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:32.248 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:32.248 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' 
'\''127.0.0.1:4262'\'' 00:34:32.248 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:32.248 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:32.248 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:32.248 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:32.248 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:32.248 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:32.248 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:32.248 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:32.248 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:32.248 ' 00:34:37.550 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:37.550 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:37.550 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:37.550 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:37.550 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:37.550 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:37.550 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:37.550 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:37.550 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:37.550 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:37.550 Executing command: ['/bdevs/malloc 
delete Malloc4', 'Malloc4', False] 00:34:37.550 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:37.550 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:37.550 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:37.550 00:14:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:37.550 00:14:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:37.550 00:14:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:37.550 00:14:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 945591 00:34:37.550 00:14:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 945591 ']' 00:34:37.550 00:14:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 945591 00:34:37.550 00:14:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 945591 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 945591' 00:34:37.550 killing process with pid 945591 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 945591 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 945591 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 945591 ']' 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 945591 
00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 945591 ']' 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 945591 00:34:37.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (945591) - No such process 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 945591 is not found' 00:34:37.550 Process with pid 945591 is not found 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:37.550 00:34:37.550 real 0m15.985s 00:34:37.550 user 0m33.795s 00:34:37.550 sys 0m0.786s 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:37.550 00:14:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:37.550 ************************************ 00:34:37.550 END TEST spdkcli_nvmf_tcp 00:34:37.550 ************************************ 00:34:37.809 00:14:43 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:37.809 00:14:43 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:37.809 00:14:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:37.809 00:14:43 -- common/autotest_common.sh@10 -- # set +x 00:34:37.809 ************************************ 00:34:37.809 START TEST nvmf_identify_passthru 00:34:37.809 ************************************ 00:34:37.809 00:14:43 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:37.809 * Looking for test storage... 00:34:37.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:37.809 00:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:37.809 00:14:43 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:37.809 00:14:43 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:37.809 00:14:43 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:37.809 00:14:43 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.809 00:14:43 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.809 00:14:43 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.809 00:14:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:37.809 00:14:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:37.809 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:37.810 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:37.810 00:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:37.810 00:14:43 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:37.810 00:14:43 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:37.810 00:14:43 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:37.810 00:14:43 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.810 00:14:43 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.810 00:14:43 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.810 00:14:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:34:37.810 00:14:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.810 00:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:37.810 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:37.810 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:37.810 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:37.810 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:37.810 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:37.810 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:37.810 00:14:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:37.810 00:14:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.810 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:37.810 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:37.810 00:14:43 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:37.810 00:14:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:39.709 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:39.709 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:34:39.709 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:39.709 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:39.709 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:39.709 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:39.709 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:39.710 00:14:45 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:39.710 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:39.710 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:39.710 00:14:45 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:39.710 Found net devices under 0000:09:00.0: cvl_0_0 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.710 00:14:45 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:39.710 Found net devices under 0000:09:00.1: cvl_0_1 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:39.710 00:14:45 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:39.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:39.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:34:39.710 00:34:39.710 --- 10.0.0.2 ping statistics --- 00:34:39.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.710 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:39.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:39.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:34:39.710 00:34:39.710 --- 10.0.0.1 ping statistics --- 00:34:39.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.710 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:39.710 00:14:45 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:39.710 00:14:45 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:39.710 00:14:45 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:39.710 00:14:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:39.710 00:14:45 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:39.710 00:14:45 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:34:39.710 00:14:45 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:34:39.710 00:14:45 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:34:39.710 00:14:45 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:34:39.710 00:14:45 nvmf_identify_passthru -- 
common/autotest_common.sh@1509 -- # bdfs=() 00:34:39.710 00:14:45 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:39.710 00:14:45 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:39.710 00:14:45 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:39.710 00:14:45 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:34:39.970 00:14:45 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:34:39.970 00:14:45 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:0b:00.0 00:34:39.970 00:14:45 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:0b:00.0 00:34:39.970 00:14:45 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:34:39.970 00:14:45 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:34:39.970 00:14:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:34:39.970 00:14:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:39.970 00:14:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:39.970 EAL: No free 2048 kB hugepages reported on node 1 00:34:44.154 00:14:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:34:44.154 00:14:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:34:44.154 00:14:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:34:44.154 00:14:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:44.154 EAL: No free 2048 kB hugepages reported on node 1 00:34:48.337 00:14:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:48.337 00:14:53 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:48.337 00:14:53 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:48.337 00:14:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:48.337 00:14:53 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:48.337 00:14:53 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:48.337 00:14:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:48.337 00:14:53 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=950084 00:34:48.337 00:14:53 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:48.337 00:14:53 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:48.337 00:14:53 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 950084 00:34:48.337 00:14:53 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 950084 ']' 00:34:48.337 00:14:53 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:48.337 00:14:53 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:48.337 00:14:53 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:48.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:48.337 00:14:53 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:48.337 00:14:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:48.337 [2024-07-25 00:14:53.762216] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:34:48.337 [2024-07-25 00:14:53.762291] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:48.337 EAL: No free 2048 kB hugepages reported on node 1 00:34:48.337 [2024-07-25 00:14:53.825164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:48.337 [2024-07-25 00:14:53.912447] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:48.337 [2024-07-25 00:14:53.912496] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:48.337 [2024-07-25 00:14:53.912510] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:48.337 [2024-07-25 00:14:53.912521] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:48.337 [2024-07-25 00:14:53.912531] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:48.337 [2024-07-25 00:14:53.912610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.337 [2024-07-25 00:14:53.912676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:48.337 [2024-07-25 00:14:53.912745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:48.337 [2024-07-25 00:14:53.912743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:48.337 00:14:53 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:48.337 00:14:53 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:34:48.337 00:14:53 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:48.337 00:14:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.337 00:14:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:48.337 INFO: Log level set to 20 00:34:48.337 INFO: Requests: 00:34:48.337 { 00:34:48.337 "jsonrpc": "2.0", 00:34:48.337 "method": "nvmf_set_config", 00:34:48.337 "id": 1, 00:34:48.337 "params": { 00:34:48.337 "admin_cmd_passthru": { 00:34:48.337 "identify_ctrlr": true 00:34:48.337 } 00:34:48.337 } 00:34:48.337 } 00:34:48.337 00:34:48.337 INFO: response: 00:34:48.337 { 00:34:48.337 "jsonrpc": "2.0", 00:34:48.337 "id": 1, 00:34:48.337 "result": true 00:34:48.337 } 00:34:48.337 00:34:48.337 00:14:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.337 00:14:53 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:48.337 00:14:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.337 00:14:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:48.337 INFO: Setting log level to 20 00:34:48.337 INFO: Setting log level to 20 00:34:48.337 INFO: Log level set to 20 00:34:48.337 INFO: Log level set to 20 00:34:48.337 
INFO: Requests: 00:34:48.337 { 00:34:48.337 "jsonrpc": "2.0", 00:34:48.337 "method": "framework_start_init", 00:34:48.337 "id": 1 00:34:48.337 } 00:34:48.337 00:34:48.337 INFO: Requests: 00:34:48.337 { 00:34:48.337 "jsonrpc": "2.0", 00:34:48.337 "method": "framework_start_init", 00:34:48.337 "id": 1 00:34:48.337 } 00:34:48.337 00:34:48.596 [2024-07-25 00:14:54.078311] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:48.596 INFO: response: 00:34:48.596 { 00:34:48.596 "jsonrpc": "2.0", 00:34:48.596 "id": 1, 00:34:48.596 "result": true 00:34:48.596 } 00:34:48.596 00:34:48.596 INFO: response: 00:34:48.596 { 00:34:48.596 "jsonrpc": "2.0", 00:34:48.596 "id": 1, 00:34:48.596 "result": true 00:34:48.596 } 00:34:48.596 00:34:48.596 00:14:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.596 00:14:54 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:48.596 00:14:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.596 00:14:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:48.596 INFO: Setting log level to 40 00:34:48.596 INFO: Setting log level to 40 00:34:48.596 INFO: Setting log level to 40 00:34:48.596 [2024-07-25 00:14:54.088261] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:48.596 00:14:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.596 00:14:54 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:48.596 00:14:54 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:48.596 00:14:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:48.596 00:14:54 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:34:48.596 00:14:54 
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.596 00:14:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:51.876 Nvme0n1 00:34:51.876 00:14:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.876 00:14:56 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:51.876 00:14:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.876 00:14:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:51.876 00:14:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.876 00:14:56 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:51.876 00:14:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.876 00:14:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:51.876 00:14:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.876 00:14:56 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:51.876 00:14:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.876 00:14:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:51.876 [2024-07-25 00:14:56.972342] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:51.876 00:14:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.876 00:14:56 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:51.876 00:14:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.876 00:14:56 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:51.876 [ 00:34:51.876 { 00:34:51.876 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:51.876 "subtype": "Discovery", 00:34:51.876 "listen_addresses": [], 00:34:51.876 "allow_any_host": true, 00:34:51.876 "hosts": [] 00:34:51.876 }, 00:34:51.876 { 00:34:51.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:51.876 "subtype": "NVMe", 00:34:51.876 "listen_addresses": [ 00:34:51.876 { 00:34:51.876 "trtype": "TCP", 00:34:51.876 "adrfam": "IPv4", 00:34:51.876 "traddr": "10.0.0.2", 00:34:51.876 "trsvcid": "4420" 00:34:51.876 } 00:34:51.876 ], 00:34:51.876 "allow_any_host": true, 00:34:51.876 "hosts": [], 00:34:51.876 "serial_number": "SPDK00000000000001", 00:34:51.876 "model_number": "SPDK bdev Controller", 00:34:51.876 "max_namespaces": 1, 00:34:51.876 "min_cntlid": 1, 00:34:51.876 "max_cntlid": 65519, 00:34:51.876 "namespaces": [ 00:34:51.876 { 00:34:51.876 "nsid": 1, 00:34:51.876 "bdev_name": "Nvme0n1", 00:34:51.876 "name": "Nvme0n1", 00:34:51.876 "nguid": "2AF07797529D46BC8AB11C475168B1B6", 00:34:51.876 "uuid": "2af07797-529d-46bc-8ab1-1c475168b1b6" 00:34:51.876 } 00:34:51.876 ] 00:34:51.876 } 00:34:51.876 ] 00:34:51.876 00:14:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.876 00:14:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:51.876 00:14:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:51.876 00:14:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:51.876 EAL: No free 2048 kB hugepages reported on node 1 00:34:51.876 00:14:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:34:51.876 00:14:57 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:51.876 00:14:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:51.876 00:14:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:51.876 EAL: No free 2048 kB hugepages reported on node 1 00:34:51.876 00:14:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:51.876 00:14:57 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:34:51.876 00:14:57 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:51.876 00:14:57 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:51.876 00:14:57 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.876 00:14:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:51.876 00:14:57 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.876 00:14:57 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:51.876 00:14:57 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:51.876 00:14:57 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:51.876 00:14:57 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:51.876 00:14:57 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:51.876 00:14:57 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:51.876 00:14:57 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:51.876 00:14:57 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:51.876 rmmod 
nvme_tcp 00:34:51.876 rmmod nvme_fabrics 00:34:51.876 rmmod nvme_keyring 00:34:51.876 00:14:57 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:51.876 00:14:57 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:51.876 00:14:57 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:51.876 00:14:57 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 950084 ']' 00:34:51.876 00:14:57 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 950084 00:34:51.876 00:14:57 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 950084 ']' 00:34:51.876 00:14:57 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 950084 00:34:51.876 00:14:57 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:34:51.876 00:14:57 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:51.876 00:14:57 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 950084 00:34:51.876 00:14:57 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:51.876 00:14:57 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:51.876 00:14:57 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 950084' 00:34:51.876 killing process with pid 950084 00:34:51.876 00:14:57 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 950084 00:34:51.876 00:14:57 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 950084 00:34:53.249 00:14:58 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:53.249 00:14:58 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:53.249 00:14:58 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:53.249 00:14:58 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:53.249 
00:14:58 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:53.249 00:14:58 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.249 00:14:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:53.249 00:14:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.151 00:15:00 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:55.151 00:34:55.151 real 0m17.570s 00:34:55.151 user 0m25.886s 00:34:55.151 sys 0m2.202s 00:34:55.151 00:15:00 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:55.151 00:15:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.151 ************************************ 00:34:55.151 END TEST nvmf_identify_passthru 00:34:55.151 ************************************ 00:34:55.408 00:15:00 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:55.408 00:15:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:55.408 00:15:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:55.408 00:15:00 -- common/autotest_common.sh@10 -- # set +x 00:34:55.408 ************************************ 00:34:55.408 START TEST nvmf_dif 00:34:55.408 ************************************ 00:34:55.408 00:15:00 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:55.408 * Looking for test storage... 
00:34:55.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:55.408 00:15:00 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:55.408 00:15:00 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:55.408 00:15:00 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:55.408 00:15:00 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:55.408 00:15:00 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.408 00:15:00 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.408 00:15:00 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.408 00:15:00 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:55.408 00:15:00 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:55.408 00:15:00 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:55.408 00:15:00 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:55.408 00:15:00 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:55.408 00:15:00 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:55.408 00:15:00 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:55.408 00:15:00 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:55.408 00:15:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:55.408 00:15:00 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:55.408 00:15:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:57.306 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 
(0x8086 - 0x159b)' 00:34:57.306 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:57.306 Found net devices under 0000:09:00.0: cvl_0_0 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:57.306 Found net devices under 0000:09:00.1: cvl_0_1 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:57.306 00:15:02 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:57.306 00:15:02 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:57.307 00:15:02 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:57.307 00:15:03 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:57.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:57.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:34:57.307 00:34:57.307 --- 10.0.0.2 ping statistics --- 00:34:57.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:57.307 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:34:57.307 00:15:03 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:57.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:57.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:34:57.307 00:34:57.307 --- 10.0.0.1 ping statistics --- 00:34:57.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:57.307 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:34:57.307 00:15:03 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:57.307 00:15:03 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:34:57.307 00:15:03 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:57.307 00:15:03 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:58.679 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:58.679 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:58.679 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:58.679 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:58.679 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:58.679 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:58.679 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:58.679 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:58.679 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:58.679 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:58.679 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:58.679 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:58.679 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:58.679 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:58.679 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:58.679 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:58.679 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:58.679 00:15:04 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:58.679 00:15:04 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:58.679 00:15:04 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:58.679 00:15:04 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:58.679 00:15:04 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:58.679 00:15:04 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:58.679 00:15:04 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:58.679 00:15:04 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:58.679 00:15:04 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:58.679 00:15:04 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:58.679 00:15:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:58.679 00:15:04 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=953323 00:34:58.679 00:15:04 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:58.679 00:15:04 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 953323 00:34:58.679 00:15:04 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 953323 ']' 00:34:58.679 00:15:04 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:58.679 00:15:04 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:58.679 00:15:04 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:58.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:58.679 00:15:04 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:58.679 00:15:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:58.679 [2024-07-25 00:15:04.352700] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:34:58.679 [2024-07-25 00:15:04.352779] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:58.679 EAL: No free 2048 kB hugepages reported on node 1 00:34:58.937 [2024-07-25 00:15:04.416611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.937 [2024-07-25 00:15:04.507049] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:58.937 [2024-07-25 00:15:04.507116] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:58.937 [2024-07-25 00:15:04.507133] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:58.937 [2024-07-25 00:15:04.507145] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:58.937 [2024-07-25 00:15:04.507155] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:58.937 [2024-07-25 00:15:04.507199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:58.937 00:15:04 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:58.937 00:15:04 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:34:58.937 00:15:04 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:58.937 00:15:04 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:58.937 00:15:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:58.937 00:15:04 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:58.937 00:15:04 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:58.937 00:15:04 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:59.195 00:15:04 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.195 00:15:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:59.195 [2024-07-25 00:15:04.660142] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:59.195 00:15:04 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.195 00:15:04 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:59.195 00:15:04 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:59.195 00:15:04 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:59.195 00:15:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:59.195 ************************************ 00:34:59.195 START TEST fio_dif_1_default 00:34:59.195 ************************************ 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:59.195 bdev_null0 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:59.195 [2024-07-25 00:15:04.720487] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:59.195 00:15:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:59.196 { 00:34:59.196 "params": { 00:34:59.196 "name": "Nvme$subsystem", 00:34:59.196 "trtype": "$TEST_TRANSPORT", 00:34:59.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:59.196 "adrfam": "ipv4", 00:34:59.196 "trsvcid": "$NVMF_PORT", 00:34:59.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:59.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:59.196 "hdgst": ${hdgst:-false}, 00:34:59.196 "ddgst": ${ddgst:-false} 00:34:59.196 }, 00:34:59.196 "method": "bdev_nvme_attach_controller" 00:34:59.196 } 00:34:59.196 EOF 00:34:59.196 )") 
00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:59.196 "params": { 00:34:59.196 "name": "Nvme0", 00:34:59.196 "trtype": "tcp", 00:34:59.196 "traddr": "10.0.0.2", 00:34:59.196 "adrfam": "ipv4", 00:34:59.196 "trsvcid": "4420", 00:34:59.196 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:59.196 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:59.196 "hdgst": false, 00:34:59.196 "ddgst": false 00:34:59.196 }, 00:34:59.196 "method": "bdev_nvme_attach_controller" 00:34:59.196 }' 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:59.196 00:15:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.454 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:59.454 fio-3.35 
00:34:59.454 Starting 1 thread 00:34:59.454 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.688 00:35:11.688 filename0: (groupid=0, jobs=1): err= 0: pid=953552: Thu Jul 25 00:15:15 2024 00:35:11.688 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10028msec) 00:35:11.688 slat (nsec): min=4294, max=63382, avg=9458.68, stdev=4254.22 00:35:11.688 clat (usec): min=40838, max=45117, avg=41065.90, stdev=365.32 00:35:11.688 lat (usec): min=40845, max=45145, avg=41075.36, stdev=365.78 00:35:11.688 clat percentiles (usec): 00:35:11.688 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:11.688 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:11.688 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:35:11.688 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:35:11.688 | 99.99th=[45351] 00:35:11.688 bw ( KiB/s): min= 384, max= 416, per=99.66%, avg=388.80, stdev=11.72, samples=20 00:35:11.688 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:11.688 lat (msec) : 50=100.00% 00:35:11.688 cpu : usr=89.54%, sys=10.17%, ctx=15, majf=0, minf=260 00:35:11.688 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:11.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.688 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.688 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:11.688 00:35:11.688 Run status group 0 (all jobs): 00:35:11.688 READ: bw=389KiB/s (399kB/s), 389KiB/s-389KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10028-10028msec 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for 
sub in "$@" 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.688 00:35:11.688 real 0m11.103s 00:35:11.688 user 0m10.040s 00:35:11.688 sys 0m1.284s 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:11.688 ************************************ 00:35:11.688 END TEST fio_dif_1_default 00:35:11.688 ************************************ 00:35:11.688 00:15:15 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:11.688 00:15:15 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:11.688 00:15:15 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:11.688 00:15:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:11.688 ************************************ 00:35:11.688 START TEST fio_dif_1_multi_subsystems 00:35:11.688 ************************************ 00:35:11.688 00:15:15 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:11.688 bdev_null0 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.688 00:15:15 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:11.688 [2024-07-25 00:15:15.876891] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:11.688 bdev_null1 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.688 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- 
# local subsystem config 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:11.689 { 00:35:11.689 "params": { 00:35:11.689 "name": "Nvme$subsystem", 00:35:11.689 "trtype": "$TEST_TRANSPORT", 00:35:11.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:11.689 "adrfam": "ipv4", 00:35:11.689 "trsvcid": "$NVMF_PORT", 00:35:11.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:11.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:11.689 "hdgst": ${hdgst:-false}, 00:35:11.689 "ddgst": ${ddgst:-false} 00:35:11.689 }, 00:35:11.689 "method": "bdev_nvme_attach_controller" 00:35:11.689 } 00:35:11.689 EOF 00:35:11.689 )") 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:11.689 
00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:11.689 { 00:35:11.689 "params": { 00:35:11.689 "name": "Nvme$subsystem", 00:35:11.689 "trtype": "$TEST_TRANSPORT", 00:35:11.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:11.689 "adrfam": "ipv4", 00:35:11.689 "trsvcid": "$NVMF_PORT", 00:35:11.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:11.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:11.689 "hdgst": ${hdgst:-false}, 00:35:11.689 "ddgst": ${ddgst:-false} 00:35:11.689 }, 00:35:11.689 "method": "bdev_nvme_attach_controller" 00:35:11.689 } 00:35:11.689 EOF 00:35:11.689 )") 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:11.689 "params": { 00:35:11.689 "name": "Nvme0", 00:35:11.689 "trtype": "tcp", 00:35:11.689 "traddr": "10.0.0.2", 00:35:11.689 "adrfam": "ipv4", 00:35:11.689 "trsvcid": "4420", 00:35:11.689 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:11.689 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:11.689 "hdgst": false, 00:35:11.689 "ddgst": false 00:35:11.689 }, 00:35:11.689 "method": "bdev_nvme_attach_controller" 00:35:11.689 },{ 00:35:11.689 "params": { 00:35:11.689 "name": "Nvme1", 00:35:11.689 "trtype": "tcp", 00:35:11.689 "traddr": "10.0.0.2", 00:35:11.689 "adrfam": "ipv4", 00:35:11.689 "trsvcid": "4420", 00:35:11.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:11.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:11.689 "hdgst": false, 00:35:11.689 "ddgst": false 00:35:11.689 }, 00:35:11.689 "method": "bdev_nvme_attach_controller" 00:35:11.689 }' 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:11.689 00:15:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:11.689 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:11.689 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:11.689 fio-3.35 00:35:11.689 Starting 2 threads 00:35:11.689 EAL: No free 2048 kB hugepages reported on node 1 00:35:21.657 00:35:21.657 filename0: (groupid=0, jobs=1): err= 0: pid=955457: Thu Jul 25 00:15:26 2024 00:35:21.657 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10026msec) 00:35:21.657 slat (nsec): min=6390, max=51766, avg=9915.34, stdev=4428.87 00:35:21.657 clat (usec): min=40856, max=43164, avg=41056.69, stdev=291.07 00:35:21.657 lat (usec): min=40863, max=43215, avg=41066.60, stdev=291.46 00:35:21.657 clat percentiles (usec): 00:35:21.657 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:21.657 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:21.657 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:35:21.657 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:35:21.657 | 99.99th=[43254] 00:35:21.657 bw ( KiB/s): min= 384, max= 416, per=49.82%, avg=388.80, stdev=11.72, samples=20 00:35:21.657 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:21.657 lat (msec) : 50=100.00% 00:35:21.657 cpu : usr=94.63%, sys=5.08%, ctx=14, majf=0, minf=47 00:35:21.657 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:21.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:35:21.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.657 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:21.657 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:21.657 filename1: (groupid=0, jobs=1): err= 0: pid=955458: Thu Jul 25 00:15:26 2024 00:35:21.657 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10016msec) 00:35:21.657 slat (nsec): min=6473, max=63530, avg=9863.16, stdev=5103.74 00:35:21.657 clat (usec): min=40798, max=42900, avg=41014.70, stdev=215.64 00:35:21.657 lat (usec): min=40805, max=42914, avg=41024.56, stdev=216.37 00:35:21.657 clat percentiles (usec): 00:35:21.657 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:21.657 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:21.657 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:21.657 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:21.657 | 99.99th=[42730] 00:35:21.657 bw ( KiB/s): min= 384, max= 416, per=49.82%, avg=388.80, stdev=11.72, samples=20 00:35:21.657 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:21.657 lat (msec) : 50=100.00% 00:35:21.657 cpu : usr=94.19%, sys=5.50%, ctx=17, majf=0, minf=246 00:35:21.657 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:21.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.657 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:21.657 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:21.657 00:35:21.657 Run status group 0 (all jobs): 00:35:21.657 READ: bw=779KiB/s (797kB/s), 389KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10016-10026msec 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 
00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:21.657 00:15:27 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.657 00:35:21.657 real 0m11.335s 00:35:21.657 user 0m20.336s 00:35:21.657 sys 0m1.351s 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:21.657 00:15:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:21.657 ************************************ 00:35:21.657 END TEST fio_dif_1_multi_subsystems 00:35:21.657 ************************************ 00:35:21.657 00:15:27 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:21.657 00:15:27 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:21.657 00:15:27 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:21.657 00:15:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:21.657 ************************************ 00:35:21.657 START TEST fio_dif_rand_params 00:35:21.657 ************************************ 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:21.657 00:15:27 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:21.657 bdev_null0 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.657 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:21.658 [2024-07-25 00:15:27.263992] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:21.658 { 
00:35:21.658 "params": { 00:35:21.658 "name": "Nvme$subsystem", 00:35:21.658 "trtype": "$TEST_TRANSPORT", 00:35:21.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:21.658 "adrfam": "ipv4", 00:35:21.658 "trsvcid": "$NVMF_PORT", 00:35:21.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:21.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:21.658 "hdgst": ${hdgst:-false}, 00:35:21.658 "ddgst": ${ddgst:-false} 00:35:21.658 }, 00:35:21.658 "method": "bdev_nvme_attach_controller" 00:35:21.658 } 00:35:21.658 EOF 00:35:21.658 )") 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:21.658 
00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:21.658 "params": { 00:35:21.658 "name": "Nvme0", 00:35:21.658 "trtype": "tcp", 00:35:21.658 "traddr": "10.0.0.2", 00:35:21.658 "adrfam": "ipv4", 00:35:21.658 "trsvcid": "4420", 00:35:21.658 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:21.658 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:21.658 "hdgst": false, 00:35:21.658 "ddgst": false 00:35:21.658 }, 00:35:21.658 "method": "bdev_nvme_attach_controller" 00:35:21.658 }' 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:21.658 00:15:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:21.916 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:21.916 ... 00:35:21.916 fio-3.35 00:35:21.916 Starting 3 threads 00:35:21.916 EAL: No free 2048 kB hugepages reported on node 1 00:35:28.465 00:35:28.465 filename0: (groupid=0, jobs=1): err= 0: pid=956864: Thu Jul 25 00:15:33 2024 00:35:28.465 read: IOPS=121, BW=15.2MiB/s (16.0MB/s)(76.9MiB/5041msec) 00:35:28.465 slat (nsec): min=6342, max=76783, avg=14229.11, stdev=5886.62 00:35:28.465 clat (usec): min=7827, max=64555, avg=24568.90, stdev=18306.78 00:35:28.465 lat (usec): min=7845, max=64632, avg=24583.13, stdev=18306.68 00:35:28.465 clat percentiles (usec): 00:35:28.465 | 1.00th=[ 8225], 5.00th=[ 9503], 10.00th=[10814], 20.00th=[11994], 00:35:28.465 | 30.00th=[13173], 40.00th=[13829], 50.00th=[14484], 60.00th=[15270], 00:35:28.465 | 70.00th=[16909], 80.00th=[52691], 90.00th=[54264], 95.00th=[55837], 00:35:28.465 | 99.00th=[58459], 99.50th=[59507], 99.90th=[64750], 99.95th=[64750], 00:35:28.465 | 99.99th=[64750] 00:35:28.465 bw ( KiB/s): min=12544, max=19712, per=27.66%, avg=15667.20, stdev=2147.96, samples=10 00:35:28.465 iops : min= 98, max= 154, avg=122.40, stdev=16.78, samples=10 00:35:28.465 lat (msec) : 10=6.34%, 20=65.85%, 50=2.44%, 100=25.37% 00:35:28.465 cpu : usr=93.37%, sys=6.21%, ctx=9, majf=0, minf=108 00:35:28.465 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:28.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.465 issued rwts: total=615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.465 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:28.465 filename0: (groupid=0, jobs=1): err= 0: pid=956865: Thu Jul 25 00:15:33 2024 00:35:28.465 read: IOPS=168, BW=21.0MiB/s (22.0MB/s)(105MiB/5002msec) 
00:35:28.465 slat (nsec): min=7302, max=64219, avg=17641.86, stdev=7358.22 00:35:28.465 clat (usec): min=5622, max=96348, avg=17817.78, stdev=14928.77 00:35:28.465 lat (usec): min=5636, max=96362, avg=17835.42, stdev=14928.97 00:35:28.465 clat percentiles (usec): 00:35:28.465 | 1.00th=[ 5866], 5.00th=[ 6194], 10.00th=[ 6587], 20.00th=[ 8160], 00:35:28.465 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10683], 60.00th=[15401], 00:35:28.465 | 70.00th=[19006], 80.00th=[23200], 90.00th=[49546], 95.00th=[55313], 00:35:28.465 | 99.00th=[63701], 99.50th=[68682], 99.90th=[95945], 99.95th=[95945], 00:35:28.465 | 99.99th=[95945] 00:35:28.465 bw ( KiB/s): min=15104, max=24576, per=37.26%, avg=21105.78, stdev=2938.74, samples=9 00:35:28.465 iops : min= 118, max= 192, avg=164.89, stdev=22.96, samples=9 00:35:28.465 lat (msec) : 10=42.57%, 20=29.73%, 50=18.43%, 100=9.27% 00:35:28.465 cpu : usr=93.36%, sys=6.00%, ctx=37, majf=0, minf=125 00:35:28.465 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:28.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.465 issued rwts: total=841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.465 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:28.465 filename0: (groupid=0, jobs=1): err= 0: pid=956866: Thu Jul 25 00:15:33 2024 00:35:28.465 read: IOPS=154, BW=19.3MiB/s (20.2MB/s)(96.9MiB/5022msec) 00:35:28.465 slat (nsec): min=7188, max=45846, avg=14405.37, stdev=5072.41 00:35:28.465 clat (usec): min=6255, max=97348, avg=19417.18, stdev=15109.38 00:35:28.465 lat (usec): min=6268, max=97380, avg=19431.58, stdev=15109.37 00:35:28.465 clat percentiles (usec): 00:35:28.465 | 1.00th=[ 6652], 5.00th=[ 7111], 10.00th=[ 7439], 20.00th=[ 9110], 00:35:28.465 | 30.00th=[10814], 40.00th=[11863], 50.00th=[13435], 60.00th=[17171], 00:35:28.465 | 70.00th=[20055], 80.00th=[25035], 90.00th=[50594], 
95.00th=[57410], 00:35:28.465 | 99.00th=[66323], 99.50th=[67634], 99.90th=[96994], 99.95th=[96994], 00:35:28.465 | 99.99th=[96994] 00:35:28.465 bw ( KiB/s): min=15616, max=25856, per=34.89%, avg=19767.40, stdev=3448.56, samples=10 00:35:28.465 iops : min= 122, max= 202, avg=154.40, stdev=26.93, samples=10 00:35:28.465 lat (msec) : 10=22.84%, 20=47.35%, 50=19.74%, 100=10.06% 00:35:28.465 cpu : usr=92.75%, sys=6.81%, ctx=13, majf=0, minf=99 00:35:28.465 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:28.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.465 issued rwts: total=775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.465 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:28.465 00:35:28.465 Run status group 0 (all jobs): 00:35:28.465 READ: bw=55.3MiB/s (58.0MB/s), 15.2MiB/s-21.0MiB/s (16.0MB/s-22.0MB/s), io=279MiB (292MB), run=5002-5041msec 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null0 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.465 bdev_null0 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:28.465 00:15:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.465 [2024-07-25 00:15:33.418059] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.465 bdev_null1 00:35:28.465 00:15:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.465 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.466 bdev_null2 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 
00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:28.466 { 00:35:28.466 "params": { 00:35:28.466 "name": "Nvme$subsystem", 00:35:28.466 "trtype": "$TEST_TRANSPORT", 00:35:28.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:28.466 "adrfam": "ipv4", 00:35:28.466 "trsvcid": "$NVMF_PORT", 00:35:28.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:28.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:28.466 "hdgst": ${hdgst:-false}, 00:35:28.466 "ddgst": ${ddgst:-false} 00:35:28.466 }, 00:35:28.466 "method": "bdev_nvme_attach_controller" 00:35:28.466 } 00:35:28.466 EOF 00:35:28.466 )") 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:28.466 { 00:35:28.466 "params": { 00:35:28.466 "name": "Nvme$subsystem", 00:35:28.466 "trtype": "$TEST_TRANSPORT", 00:35:28.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:28.466 "adrfam": "ipv4", 00:35:28.466 "trsvcid": "$NVMF_PORT", 00:35:28.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:28.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:28.466 "hdgst": ${hdgst:-false}, 00:35:28.466 "ddgst": ${ddgst:-false} 00:35:28.466 }, 00:35:28.466 "method": "bdev_nvme_attach_controller" 00:35:28.466 } 00:35:28.466 EOF 00:35:28.466 )") 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:28.466 
00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:28.466 { 00:35:28.466 "params": { 00:35:28.466 "name": "Nvme$subsystem", 00:35:28.466 "trtype": "$TEST_TRANSPORT", 00:35:28.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:28.466 "adrfam": "ipv4", 00:35:28.466 "trsvcid": "$NVMF_PORT", 00:35:28.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:28.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:28.466 "hdgst": ${hdgst:-false}, 00:35:28.466 "ddgst": ${ddgst:-false} 00:35:28.466 }, 00:35:28.466 "method": "bdev_nvme_attach_controller" 00:35:28.466 } 00:35:28.466 EOF 00:35:28.466 )") 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:28.466 "params": { 00:35:28.466 "name": "Nvme0", 00:35:28.466 "trtype": "tcp", 00:35:28.466 "traddr": "10.0.0.2", 00:35:28.466 "adrfam": "ipv4", 00:35:28.466 "trsvcid": "4420", 00:35:28.466 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:28.466 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:28.466 "hdgst": false, 00:35:28.466 "ddgst": false 00:35:28.466 }, 00:35:28.466 "method": "bdev_nvme_attach_controller" 00:35:28.466 },{ 00:35:28.466 "params": { 00:35:28.466 "name": "Nvme1", 00:35:28.466 "trtype": "tcp", 00:35:28.466 "traddr": "10.0.0.2", 00:35:28.466 "adrfam": "ipv4", 00:35:28.466 "trsvcid": "4420", 00:35:28.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:28.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:28.466 "hdgst": false, 00:35:28.466 "ddgst": false 00:35:28.466 }, 00:35:28.466 "method": "bdev_nvme_attach_controller" 00:35:28.466 },{ 00:35:28.466 "params": { 00:35:28.466 "name": "Nvme2", 00:35:28.466 "trtype": "tcp", 00:35:28.466 "traddr": "10.0.0.2", 00:35:28.466 "adrfam": "ipv4", 00:35:28.466 "trsvcid": "4420", 00:35:28.466 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:28.466 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:28.466 "hdgst": false, 00:35:28.466 "ddgst": false 00:35:28.466 }, 00:35:28.466 "method": "bdev_nvme_attach_controller" 00:35:28.466 }' 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:28.466 00:15:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:28.466 00:15:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:28.466 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:28.466 ... 00:35:28.466 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:28.466 ... 00:35:28.466 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:28.466 ... 
00:35:28.466 fio-3.35 00:35:28.466 Starting 24 threads 00:35:28.466 EAL: No free 2048 kB hugepages reported on node 1 00:35:40.667 00:35:40.668 filename0: (groupid=0, jobs=1): err= 0: pid=957718: Thu Jul 25 00:15:44 2024 00:35:40.668 read: IOPS=473, BW=1896KiB/s (1941kB/s)(18.6MiB/10026msec) 00:35:40.668 slat (usec): min=5, max=100, avg=33.32, stdev= 9.64 00:35:40.668 clat (usec): min=13800, max=45043, avg=33459.38, stdev=1775.83 00:35:40.668 lat (usec): min=13836, max=45068, avg=33492.70, stdev=1776.07 00:35:40.668 clat percentiles (usec): 00:35:40.668 | 1.00th=[29230], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:35:40.668 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:40.668 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:40.668 | 99.00th=[41157], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:35:40.668 | 99.99th=[44827] 00:35:40.668 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1894.20, stdev=52.44, samples=20 00:35:40.668 iops : min= 448, max= 480, avg=473.55, stdev=13.11, samples=20 00:35:40.668 lat (msec) : 20=0.34%, 50=99.66% 00:35:40.668 cpu : usr=89.55%, sys=5.07%, ctx=440, majf=0, minf=39 00:35:40.668 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:40.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.668 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.668 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.668 filename0: (groupid=0, jobs=1): err= 0: pid=957719: Thu Jul 25 00:15:44 2024 00:35:40.668 read: IOPS=471, BW=1885KiB/s (1930kB/s)(18.4MiB/10016msec) 00:35:40.668 slat (usec): min=8, max=108, avg=31.12, stdev=19.85 00:35:40.668 clat (usec): min=15812, max=65145, avg=33671.25, stdev=3730.80 00:35:40.668 lat (usec): min=15846, max=65186, avg=33702.37, stdev=3729.71 00:35:40.668 clat 
percentiles (usec): 00:35:40.668 | 1.00th=[19530], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162], 00:35:40.668 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:40.668 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:35:40.668 | 99.00th=[47449], 99.50th=[47973], 99.90th=[65274], 99.95th=[65274], 00:35:40.668 | 99.99th=[65274] 00:35:40.668 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1880.45, stdev=72.66, samples=20 00:35:40.668 iops : min= 416, max= 480, avg=470.10, stdev=18.16, samples=20 00:35:40.668 lat (msec) : 20=1.36%, 50=98.26%, 100=0.38% 00:35:40.668 cpu : usr=91.74%, sys=4.18%, ctx=520, majf=0, minf=43 00:35:40.668 IO depths : 1=4.8%, 2=10.9%, 4=24.5%, 8=52.1%, 16=7.7%, 32=0.0%, >=64=0.0% 00:35:40.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.668 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.668 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.668 filename0: (groupid=0, jobs=1): err= 0: pid=957720: Thu Jul 25 00:15:44 2024 00:35:40.668 read: IOPS=475, BW=1903KiB/s (1948kB/s)(18.6MiB/10005msec) 00:35:40.668 slat (usec): min=5, max=169, avg=18.11, stdev=16.39 00:35:40.668 clat (usec): min=5382, max=43072, avg=33471.87, stdev=2522.19 00:35:40.668 lat (usec): min=5421, max=43094, avg=33489.98, stdev=2519.61 00:35:40.668 clat percentiles (usec): 00:35:40.668 | 1.00th=[21365], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:40.668 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:40.668 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:40.668 | 99.00th=[41157], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:35:40.668 | 99.99th=[43254] 00:35:40.668 bw ( KiB/s): min= 1792, max= 2104, per=4.19%, avg=1902.74, stdev=72.16, samples=19 00:35:40.668 iops : min= 448, max= 526, 
avg=475.68, stdev=18.04, samples=19 00:35:40.668 lat (msec) : 10=0.29%, 20=0.53%, 50=99.18% 00:35:40.668 cpu : usr=98.12%, sys=1.48%, ctx=16, majf=0, minf=40 00:35:40.668 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:40.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.668 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.668 issued rwts: total=4759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.668 filename0: (groupid=0, jobs=1): err= 0: pid=957721: Thu Jul 25 00:15:44 2024 00:35:40.668 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10012msec) 00:35:40.668 slat (usec): min=8, max=114, avg=45.10, stdev=22.18 00:35:40.668 clat (usec): min=17576, max=78332, avg=33523.80, stdev=2776.67 00:35:40.668 lat (usec): min=17601, max=78370, avg=33568.90, stdev=2775.40 00:35:40.668 clat percentiles (usec): 00:35:40.668 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:35:40.668 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:35:40.668 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:40.668 | 99.00th=[40633], 99.50th=[47449], 99.90th=[72877], 99.95th=[72877], 00:35:40.668 | 99.99th=[78119] 00:35:40.668 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1881.75, stdev=72.65, samples=20 00:35:40.668 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:40.668 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:35:40.668 cpu : usr=95.65%, sys=2.52%, ctx=76, majf=0, minf=31 00:35:40.668 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:40.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.668 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.668 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.668 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:35:40.668 filename0: (groupid=0, jobs=1): err= 0: pid=957722: Thu Jul 25 00:15:44 2024 00:35:40.668 read: IOPS=471, BW=1885KiB/s (1930kB/s)(18.4MiB/10015msec) 00:35:40.668 slat (usec): min=5, max=124, avg=45.30, stdev=24.97 00:35:40.668 clat (usec): min=17516, max=75007, avg=33548.60, stdev=2877.74 00:35:40.668 lat (usec): min=17556, max=75025, avg=33593.90, stdev=2873.22 00:35:40.668 clat percentiles (usec): 00:35:40.668 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32637], 20.00th=[32900], 00:35:40.668 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:40.668 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:40.668 | 99.00th=[41681], 99.50th=[47449], 99.90th=[74974], 99.95th=[74974], 00:35:40.668 | 99.99th=[74974] 00:35:40.668 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1881.20, stdev=72.92, samples=20 00:35:40.668 iops : min= 416, max= 480, avg=470.30, stdev=18.23, samples=20 00:35:40.668 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:35:40.668 cpu : usr=96.12%, sys=2.44%, ctx=92, majf=0, minf=31 00:35:40.668 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:40.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.668 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.668 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.668 filename0: (groupid=0, jobs=1): err= 0: pid=957723: Thu Jul 25 00:15:44 2024 00:35:40.668 read: IOPS=472, BW=1889KiB/s (1935kB/s)(18.5MiB/10026msec) 00:35:40.668 slat (usec): min=8, max=101, avg=26.44, stdev=16.32 00:35:40.668 clat (usec): min=19093, max=53579, avg=33637.94, stdev=1562.12 00:35:40.668 lat (usec): min=19102, max=53598, avg=33664.38, stdev=1560.29 00:35:40.668 clat percentiles (usec): 00:35:40.668 | 1.00th=[32113], 5.00th=[32900], 10.00th=[32900], 
20.00th=[33162], 00:35:40.668 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:40.668 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:40.668 | 99.00th=[41157], 99.50th=[45351], 99.90th=[47449], 99.95th=[47449], 00:35:40.668 | 99.99th=[53740] 00:35:40.668 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:35:40.668 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:35:40.668 lat (msec) : 20=0.08%, 50=99.87%, 100=0.04% 00:35:40.668 cpu : usr=97.98%, sys=1.60%, ctx=22, majf=0, minf=36 00:35:40.668 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:40.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.668 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.668 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.668 filename0: (groupid=0, jobs=1): err= 0: pid=957724: Thu Jul 25 00:15:44 2024 00:35:40.668 read: IOPS=471, BW=1885KiB/s (1930kB/s)(18.4MiB/10015msec) 00:35:40.668 slat (usec): min=9, max=139, avg=58.22, stdev=27.27 00:35:40.668 clat (usec): min=28633, max=60913, avg=33434.89, stdev=1995.58 00:35:40.668 lat (usec): min=28657, max=60932, avg=33493.11, stdev=1991.06 00:35:40.668 clat percentiles (usec): 00:35:40.668 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:35:40.668 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:35:40.668 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:40.668 | 99.00th=[41157], 99.50th=[42730], 99.90th=[61080], 99.95th=[61080], 00:35:40.668 | 99.99th=[61080] 00:35:40.668 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1881.60, stdev=73.12, samples=20 00:35:40.668 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:40.668 lat (msec) : 50=99.66%, 100=0.34% 
00:35:40.668 cpu : usr=98.27%, sys=1.30%, ctx=15, majf=0, minf=29 00:35:40.668 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:40.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.668 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.669 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.669 filename0: (groupid=0, jobs=1): err= 0: pid=957725: Thu Jul 25 00:15:44 2024 00:35:40.669 read: IOPS=479, BW=1916KiB/s (1962kB/s)(18.7MiB/10011msec) 00:35:40.669 slat (usec): min=8, max=774, avg=26.63, stdev=31.26 00:35:40.669 clat (usec): min=15622, max=85364, avg=33283.63, stdev=4764.02 00:35:40.669 lat (usec): min=15631, max=85413, avg=33310.25, stdev=4764.04 00:35:40.669 clat percentiles (usec): 00:35:40.669 | 1.00th=[20055], 5.00th=[26084], 10.00th=[27132], 20.00th=[30802], 00:35:40.669 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:40.669 | 70.00th=[33817], 80.00th=[34341], 90.00th=[39584], 95.00th=[40633], 00:35:40.669 | 99.00th=[44827], 99.50th=[52691], 99.90th=[70779], 99.95th=[70779], 00:35:40.669 | 99.99th=[85459] 00:35:40.669 bw ( KiB/s): min= 1680, max= 1984, per=4.22%, avg=1914.40, stdev=69.31, samples=20 00:35:40.669 iops : min= 420, max= 496, avg=478.60, stdev=17.33, samples=20 00:35:40.669 lat (msec) : 20=0.75%, 50=98.50%, 100=0.75% 00:35:40.669 cpu : usr=95.05%, sys=2.53%, ctx=89, majf=0, minf=46 00:35:40.669 IO depths : 1=0.1%, 2=0.2%, 4=2.6%, 8=80.6%, 16=16.6%, 32=0.0%, >=64=0.0% 00:35:40.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.669 complete : 0=0.0%, 4=89.1%, 8=9.2%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.669 issued rwts: total=4796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.669 filename1: (groupid=0, jobs=1): err= 0: 
pid=957726: Thu Jul 25 00:15:44 2024 00:35:40.669 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10013msec) 00:35:40.669 slat (usec): min=8, max=117, avg=39.02, stdev=18.18 00:35:40.669 clat (usec): min=17384, max=59478, avg=33608.86, stdev=2269.63 00:35:40.669 lat (usec): min=17415, max=59517, avg=33647.88, stdev=2270.06 00:35:40.669 clat percentiles (usec): 00:35:40.669 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:35:40.669 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:40.669 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:40.669 | 99.00th=[45351], 99.50th=[47449], 99.90th=[59507], 99.95th=[59507], 00:35:40.669 | 99.99th=[59507] 00:35:40.669 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1881.60, stdev=73.12, samples=20 00:35:40.669 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:40.669 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:35:40.669 cpu : usr=97.91%, sys=1.59%, ctx=95, majf=0, minf=32 00:35:40.669 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:40.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.669 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.669 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.669 filename1: (groupid=0, jobs=1): err= 0: pid=957727: Thu Jul 25 00:15:44 2024 00:35:40.669 read: IOPS=471, BW=1885KiB/s (1930kB/s)(18.4MiB/10015msec) 00:35:40.669 slat (usec): min=9, max=147, avg=32.62, stdev= 8.50 00:35:40.669 clat (usec): min=28685, max=60960, avg=33653.41, stdev=1960.26 00:35:40.669 lat (usec): min=28708, max=60978, avg=33686.03, stdev=1959.54 00:35:40.669 clat percentiles (usec): 00:35:40.669 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:35:40.669 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 
60.00th=[33424], 00:35:40.669 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:40.669 | 99.00th=[41681], 99.50th=[43254], 99.90th=[61080], 99.95th=[61080], 00:35:40.669 | 99.99th=[61080] 00:35:40.669 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1881.60, stdev=73.12, samples=20 00:35:40.669 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:40.669 lat (msec) : 50=99.66%, 100=0.34% 00:35:40.669 cpu : usr=98.09%, sys=1.51%, ctx=16, majf=0, minf=36 00:35:40.669 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:40.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.669 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.669 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.669 filename1: (groupid=0, jobs=1): err= 0: pid=957728: Thu Jul 25 00:15:44 2024 00:35:40.669 read: IOPS=467, BW=1869KiB/s (1913kB/s)(18.3MiB/10010msec) 00:35:40.669 slat (usec): min=8, max=161, avg=39.90, stdev=23.22 00:35:40.669 clat (usec): min=13032, max=70549, avg=33995.87, stdev=4145.47 00:35:40.669 lat (usec): min=13088, max=70589, avg=34035.77, stdev=4143.99 00:35:40.669 clat percentiles (usec): 00:35:40.669 | 1.00th=[21890], 5.00th=[30016], 10.00th=[32637], 20.00th=[32900], 00:35:40.669 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:40.669 | 70.00th=[33817], 80.00th=[33817], 90.00th=[35390], 95.00th=[40109], 00:35:40.669 | 99.00th=[49021], 99.50th=[53216], 99.90th=[70779], 99.95th=[70779], 00:35:40.669 | 99.99th=[70779] 00:35:40.669 bw ( KiB/s): min= 1664, max= 1952, per=4.11%, avg=1864.80, stdev=69.74, samples=20 00:35:40.669 iops : min= 416, max= 488, avg=466.20, stdev=17.43, samples=20 00:35:40.669 lat (msec) : 20=0.41%, 50=98.80%, 100=0.79% 00:35:40.669 cpu : usr=98.30%, sys=1.27%, ctx=16, majf=0, minf=38 00:35:40.669 IO depths : 
1=0.9%, 2=4.6%, 4=15.8%, 8=65.2%, 16=13.5%, 32=0.0%, >=64=0.0% 00:35:40.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.669 complete : 0=0.0%, 4=92.3%, 8=3.8%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.669 issued rwts: total=4676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.669 filename1: (groupid=0, jobs=1): err= 0: pid=957729: Thu Jul 25 00:15:44 2024 00:35:40.669 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10002msec) 00:35:40.669 slat (nsec): min=7600, max=67203, avg=15367.70, stdev=8302.87 00:35:40.669 clat (usec): min=8835, max=43217, avg=33539.78, stdev=2313.36 00:35:40.669 lat (usec): min=8843, max=43236, avg=33555.15, stdev=2313.22 00:35:40.669 clat percentiles (usec): 00:35:40.669 | 1.00th=[21890], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:40.669 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:40.669 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:40.669 | 99.00th=[40633], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:35:40.669 | 99.99th=[43254] 00:35:40.669 bw ( KiB/s): min= 1792, max= 2052, per=4.19%, avg=1900.00, stdev=64.70, samples=19 00:35:40.669 iops : min= 448, max= 513, avg=475.00, stdev=16.18, samples=19 00:35:40.669 lat (msec) : 10=0.48%, 20=0.19%, 50=99.33% 00:35:40.669 cpu : usr=98.29%, sys=1.32%, ctx=14, majf=0, minf=33 00:35:40.669 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:40.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.669 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.669 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.669 filename1: (groupid=0, jobs=1): err= 0: pid=957730: Thu Jul 25 00:15:44 2024 00:35:40.669 read: IOPS=471, BW=1886KiB/s 
(1931kB/s)(18.4MiB/10011msec) 00:35:40.669 slat (usec): min=9, max=112, avg=39.48, stdev=16.31 00:35:40.669 clat (usec): min=28071, max=60339, avg=33576.30, stdev=1781.90 00:35:40.669 lat (usec): min=28113, max=60366, avg=33615.78, stdev=1780.09 00:35:40.669 clat percentiles (usec): 00:35:40.669 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:35:40.669 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:40.669 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:40.669 | 99.00th=[41157], 99.50th=[47449], 99.90th=[54264], 99.95th=[60031], 00:35:40.669 | 99.99th=[60556] 00:35:40.669 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1881.60, stdev=60.18, samples=20 00:35:40.669 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20 00:35:40.669 lat (msec) : 50=99.66%, 100=0.34% 00:35:40.669 cpu : usr=91.86%, sys=4.11%, ctx=194, majf=0, minf=30 00:35:40.669 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:40.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.669 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.669 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.669 filename1: (groupid=0, jobs=1): err= 0: pid=957731: Thu Jul 25 00:15:44 2024 00:35:40.669 read: IOPS=471, BW=1885KiB/s (1930kB/s)(18.4MiB/10015msec) 00:35:40.669 slat (usec): min=8, max=154, avg=60.98, stdev=27.15 00:35:40.669 clat (usec): min=28646, max=61031, avg=33407.55, stdev=2024.49 00:35:40.669 lat (usec): min=28661, max=61046, avg=33468.53, stdev=2018.46 00:35:40.669 clat percentiles (usec): 00:35:40.669 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:35:40.669 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:35:40.669 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 
00:35:40.669 | 99.00th=[41157], 99.50th=[42730], 99.90th=[61080], 99.95th=[61080], 00:35:40.669 | 99.99th=[61080] 00:35:40.670 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1881.60, stdev=73.12, samples=20 00:35:40.670 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:40.670 lat (msec) : 50=99.66%, 100=0.34% 00:35:40.670 cpu : usr=97.81%, sys=1.57%, ctx=110, majf=0, minf=33 00:35:40.670 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:40.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.670 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.670 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.670 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.670 filename1: (groupid=0, jobs=1): err= 0: pid=957732: Thu Jul 25 00:15:44 2024 00:35:40.670 read: IOPS=471, BW=1884KiB/s (1929kB/s)(18.4MiB/10016msec) 00:35:40.670 slat (usec): min=8, max=106, avg=38.33, stdev=15.65 00:35:40.670 clat (usec): min=18567, max=74981, avg=33614.68, stdev=2810.37 00:35:40.670 lat (usec): min=18607, max=75025, avg=33653.01, stdev=2809.60 00:35:40.670 clat percentiles (usec): 00:35:40.670 | 1.00th=[31851], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:35:40.670 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:40.670 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:40.670 | 99.00th=[41681], 99.50th=[47449], 99.90th=[74974], 99.95th=[74974], 00:35:40.670 | 99.99th=[74974] 00:35:40.670 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1881.20, stdev=72.92, samples=20 00:35:40.670 iops : min= 416, max= 480, avg=470.30, stdev=18.23, samples=20 00:35:40.670 lat (msec) : 20=0.30%, 50=99.36%, 100=0.34% 00:35:40.670 cpu : usr=97.44%, sys=1.90%, ctx=94, majf=0, minf=28 00:35:40.670 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:40.670 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.670 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.670 issued rwts: total=4718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.670 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.670 filename1: (groupid=0, jobs=1): err= 0: pid=957733: Thu Jul 25 00:15:44 2024 00:35:40.670 read: IOPS=475, BW=1901KiB/s (1946kB/s)(18.6MiB/10014msec) 00:35:40.670 slat (usec): min=6, max=105, avg=31.79, stdev=13.69 00:35:40.670 clat (usec): min=13054, max=54948, avg=33394.67, stdev=3387.25 00:35:40.670 lat (usec): min=13121, max=54983, avg=33426.47, stdev=3387.50 00:35:40.670 clat percentiles (usec): 00:35:40.670 | 1.00th=[20055], 5.00th=[32375], 10.00th=[32900], 20.00th=[32900], 00:35:40.670 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:40.670 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:35:40.670 | 99.00th=[46924], 99.50th=[52691], 99.90th=[54789], 99.95th=[54789], 00:35:40.670 | 99.99th=[54789] 00:35:40.670 bw ( KiB/s): min= 1792, max= 1984, per=4.18%, avg=1896.00, stdev=55.79, samples=20 00:35:40.670 iops : min= 448, max= 496, avg=474.00, stdev=13.95, samples=20 00:35:40.670 lat (msec) : 20=0.97%, 50=98.44%, 100=0.59% 00:35:40.670 cpu : usr=96.05%, sys=2.54%, ctx=192, majf=0, minf=35 00:35:40.670 IO depths : 1=5.0%, 2=11.1%, 4=24.3%, 8=52.1%, 16=7.5%, 32=0.0%, >=64=0.0% 00:35:40.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.670 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.670 issued rwts: total=4758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.670 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.670 filename2: (groupid=0, jobs=1): err= 0: pid=957734: Thu Jul 25 00:15:44 2024 00:35:40.670 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10013msec) 00:35:40.670 slat (nsec): min=10898, max=78053, avg=32659.81, 
stdev=8715.43 00:35:40.670 clat (usec): min=17594, max=78600, avg=33642.89, stdev=2756.54 00:35:40.670 lat (usec): min=17628, max=78637, avg=33675.55, stdev=2756.55 00:35:40.670 clat percentiles (usec): 00:35:40.670 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:35:40.670 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:40.670 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:40.670 | 99.00th=[41681], 99.50th=[47449], 99.90th=[72877], 99.95th=[72877], 00:35:40.670 | 99.99th=[78119] 00:35:40.670 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1881.60, stdev=73.12, samples=20 00:35:40.670 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:40.670 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:35:40.670 cpu : usr=97.99%, sys=1.60%, ctx=16, majf=0, minf=30 00:35:40.670 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:40.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.670 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.670 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.670 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.670 filename2: (groupid=0, jobs=1): err= 0: pid=957735: Thu Jul 25 00:15:44 2024 00:35:40.670 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10003msec) 00:35:40.670 slat (usec): min=7, max=117, avg=38.33, stdev=28.80 00:35:40.670 clat (usec): min=9150, max=53783, avg=33128.28, stdev=3597.93 00:35:40.670 lat (usec): min=9158, max=53815, avg=33166.61, stdev=3598.83 00:35:40.670 clat percentiles (usec): 00:35:40.670 | 1.00th=[19006], 5.00th=[31589], 10.00th=[32375], 20.00th=[32900], 00:35:40.670 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:40.670 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:35:40.670 | 99.00th=[46400], 99.50th=[46924], 99.90th=[47449], 
99.95th=[47449], 00:35:40.670 | 99.99th=[53740] 00:35:40.670 bw ( KiB/s): min= 1792, max= 2052, per=4.22%, avg=1913.47, stdev=66.07, samples=19 00:35:40.670 iops : min= 448, max= 513, avg=478.37, stdev=16.52, samples=19 00:35:40.670 lat (msec) : 10=0.33%, 20=1.00%, 50=98.62%, 100=0.04% 00:35:40.670 cpu : usr=97.84%, sys=1.55%, ctx=41, majf=0, minf=39 00:35:40.670 IO depths : 1=4.6%, 2=10.6%, 4=24.4%, 8=52.5%, 16=7.9%, 32=0.0%, >=64=0.0% 00:35:40.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.670 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.670 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.670 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.670 filename2: (groupid=0, jobs=1): err= 0: pid=957736: Thu Jul 25 00:15:44 2024 00:35:40.670 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10010msec) 00:35:40.670 slat (usec): min=8, max=132, avg=50.17, stdev=35.54 00:35:40.670 clat (usec): min=27374, max=60568, avg=33489.48, stdev=1925.78 00:35:40.670 lat (usec): min=27417, max=60588, avg=33539.65, stdev=1919.87 00:35:40.670 clat percentiles (usec): 00:35:40.670 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32900], 00:35:40.670 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:40.670 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:40.670 | 99.00th=[40633], 99.50th=[47973], 99.90th=[60556], 99.95th=[60556], 00:35:40.670 | 99.99th=[60556] 00:35:40.670 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1881.60, stdev=60.18, samples=20 00:35:40.670 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20 00:35:40.670 lat (msec) : 50=99.66%, 100=0.34% 00:35:40.670 cpu : usr=98.15%, sys=1.42%, ctx=14, majf=0, minf=32 00:35:40.670 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:40.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:40.670 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.670 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.670 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.670 filename2: (groupid=0, jobs=1): err= 0: pid=957737: Thu Jul 25 00:15:44 2024 00:35:40.670 read: IOPS=471, BW=1885KiB/s (1930kB/s)(18.4MiB/10015msec) 00:35:40.670 slat (usec): min=9, max=107, avg=32.40, stdev= 9.42 00:35:40.670 clat (usec): min=28614, max=60898, avg=33666.24, stdev=1955.18 00:35:40.670 lat (usec): min=28632, max=60917, avg=33698.63, stdev=1954.30 00:35:40.670 clat percentiles (usec): 00:35:40.670 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:35:40.670 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:40.670 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:40.670 | 99.00th=[41681], 99.50th=[42730], 99.90th=[61080], 99.95th=[61080], 00:35:40.670 | 99.99th=[61080] 00:35:40.670 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1881.60, stdev=73.12, samples=20 00:35:40.670 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:40.670 lat (msec) : 50=99.66%, 100=0.34% 00:35:40.670 cpu : usr=97.31%, sys=2.10%, ctx=25, majf=0, minf=33 00:35:40.670 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:40.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.670 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.670 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.670 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.670 filename2: (groupid=0, jobs=1): err= 0: pid=957738: Thu Jul 25 00:15:44 2024 00:35:40.670 read: IOPS=470, BW=1881KiB/s (1926kB/s)(18.4MiB/10005msec) 00:35:40.670 slat (usec): min=6, max=120, avg=41.71, stdev=24.19 00:35:40.670 clat (usec): min=19633, max=69119, avg=33687.79, stdev=2734.25 
00:35:40.670 lat (usec): min=19666, max=69137, avg=33729.50, stdev=2733.15 00:35:40.670 clat percentiles (usec): 00:35:40.670 | 1.00th=[31851], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:35:40.670 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:40.670 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:40.670 | 99.00th=[47449], 99.50th=[47973], 99.90th=[68682], 99.95th=[68682], 00:35:40.670 | 99.99th=[68682] 00:35:40.670 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1879.58, stdev=73.20, samples=19 00:35:40.670 iops : min= 416, max= 480, avg=469.89, stdev=18.30, samples=19 00:35:40.671 lat (msec) : 20=0.19%, 50=99.47%, 100=0.34% 00:35:40.671 cpu : usr=98.37%, sys=1.22%, ctx=15, majf=0, minf=32 00:35:40.671 IO depths : 1=5.7%, 2=11.9%, 4=24.8%, 8=50.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:40.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.671 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.671 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.671 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.671 filename2: (groupid=0, jobs=1): err= 0: pid=957739: Thu Jul 25 00:15:44 2024 00:35:40.671 read: IOPS=474, BW=1896KiB/s (1942kB/s)(18.6MiB/10025msec) 00:35:40.671 slat (nsec): min=8627, max=97063, avg=32650.05, stdev=11397.45 00:35:40.671 clat (usec): min=13413, max=44823, avg=33476.87, stdev=1809.68 00:35:40.671 lat (usec): min=13471, max=44840, avg=33509.52, stdev=1807.63 00:35:40.671 clat percentiles (usec): 00:35:40.671 | 1.00th=[28967], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:35:40.671 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:40.671 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:40.671 | 99.00th=[41157], 99.50th=[42206], 99.90th=[43254], 99.95th=[44827], 00:35:40.671 | 99.99th=[44827] 00:35:40.671 bw ( KiB/s): min= 1792, 
max= 1920, per=4.18%, avg=1894.20, stdev=52.44, samples=20 00:35:40.671 iops : min= 448, max= 480, avg=473.55, stdev=13.11, samples=20 00:35:40.671 lat (msec) : 20=0.46%, 50=99.54% 00:35:40.671 cpu : usr=89.72%, sys=5.09%, ctx=588, majf=0, minf=31 00:35:40.671 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:40.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.671 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.671 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.671 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.671 filename2: (groupid=0, jobs=1): err= 0: pid=957740: Thu Jul 25 00:15:44 2024 00:35:40.671 read: IOPS=471, BW=1885KiB/s (1931kB/s)(18.4MiB/10014msec) 00:35:40.671 slat (usec): min=6, max=116, avg=34.14, stdev=12.50 00:35:40.671 clat (usec): min=17531, max=73625, avg=33641.23, stdev=2784.06 00:35:40.671 lat (usec): min=17564, max=73644, avg=33675.37, stdev=2782.68 00:35:40.671 clat percentiles (usec): 00:35:40.671 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:35:40.671 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:40.671 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:40.671 | 99.00th=[41681], 99.50th=[47449], 99.90th=[73925], 99.95th=[73925], 00:35:40.671 | 99.99th=[73925] 00:35:40.671 bw ( KiB/s): min= 1660, max= 1920, per=4.15%, avg=1881.40, stdev=73.75, samples=20 00:35:40.671 iops : min= 415, max= 480, avg=470.35, stdev=18.44, samples=20 00:35:40.671 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:35:40.671 cpu : usr=94.64%, sys=2.90%, ctx=483, majf=0, minf=28 00:35:40.671 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:40.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.671 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.671 
issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.671 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.671 filename2: (groupid=0, jobs=1): err= 0: pid=957741: Thu Jul 25 00:15:44 2024 00:35:40.671 read: IOPS=487, BW=1949KiB/s (1996kB/s)(19.1MiB/10012msec) 00:35:40.671 slat (usec): min=8, max=113, avg=23.67, stdev=18.38 00:35:40.671 clat (usec): min=11719, max=70525, avg=32721.36, stdev=5354.36 00:35:40.671 lat (usec): min=11730, max=70563, avg=32745.03, stdev=5355.38 00:35:40.671 clat percentiles (usec): 00:35:40.671 | 1.00th=[18744], 5.00th=[23987], 10.00th=[26084], 20.00th=[28705], 00:35:40.671 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33817], 00:35:40.671 | 70.00th=[33817], 80.00th=[34341], 90.00th=[38536], 95.00th=[40633], 00:35:40.671 | 99.00th=[47449], 99.50th=[51119], 99.90th=[70779], 99.95th=[70779], 00:35:40.671 | 99.99th=[70779] 00:35:40.671 bw ( KiB/s): min= 1664, max= 2320, per=4.29%, avg=1944.80, stdev=127.52, samples=20 00:35:40.671 iops : min= 416, max= 580, avg=486.20, stdev=31.88, samples=20 00:35:40.671 lat (msec) : 20=2.46%, 50=96.64%, 100=0.90% 00:35:40.671 cpu : usr=94.42%, sys=3.00%, ctx=155, majf=0, minf=26 00:35:40.671 IO depths : 1=0.4%, 2=1.7%, 4=7.1%, 8=75.8%, 16=15.1%, 32=0.0%, >=64=0.0% 00:35:40.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.671 complete : 0=0.0%, 4=90.0%, 8=7.3%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.671 issued rwts: total=4878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.671 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:40.671 00:35:40.671 Run status group 0 (all jobs): 00:35:40.671 READ: bw=44.3MiB/s (46.5MB/s), 1869KiB/s-1949KiB/s (1913kB/s-1996kB/s), io=444MiB (466MB), run=10002-10026msec 00:35:40.671 00:15:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:40.671 00:15:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:40.671 00:15:44 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:40.671 00:15:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:40.671 00:15:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:40.671 00:15:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:40.671 00:15:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.671 00:15:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.671 00:15:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.671 00:15:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:40.671 00:15:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.671 00:15:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 
00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:40.671 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.672 bdev_null0 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.672 [2024-07-25 00:15:45.060647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.672 bdev_null1 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.672 00:15:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:40.672 { 00:35:40.672 "params": { 00:35:40.672 "name": "Nvme$subsystem", 00:35:40.672 "trtype": "$TEST_TRANSPORT", 00:35:40.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:40.672 "adrfam": "ipv4", 00:35:40.672 "trsvcid": "$NVMF_PORT", 00:35:40.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:40.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:40.672 "hdgst": ${hdgst:-false}, 00:35:40.672 "ddgst": ${ddgst:-false} 00:35:40.672 }, 00:35:40.672 "method": "bdev_nvme_attach_controller" 00:35:40.672 } 00:35:40.672 EOF 00:35:40.672 )") 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # gen_fio_conf 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk 
'{print $3}' 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:40.672 { 00:35:40.672 "params": { 00:35:40.672 "name": "Nvme$subsystem", 00:35:40.672 "trtype": "$TEST_TRANSPORT", 00:35:40.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:40.672 "adrfam": "ipv4", 00:35:40.672 "trsvcid": "$NVMF_PORT", 00:35:40.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:40.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:40.672 "hdgst": ${hdgst:-false}, 00:35:40.672 "ddgst": ${ddgst:-false} 00:35:40.672 }, 00:35:40.672 "method": "bdev_nvme_attach_controller" 00:35:40.672 } 00:35:40.672 EOF 00:35:40.672 )") 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:40.672 "params": { 00:35:40.672 "name": "Nvme0", 00:35:40.672 "trtype": "tcp", 00:35:40.672 "traddr": "10.0.0.2", 00:35:40.672 "adrfam": "ipv4", 00:35:40.672 "trsvcid": "4420", 00:35:40.672 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:40.672 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:40.672 "hdgst": false, 00:35:40.672 "ddgst": false 00:35:40.672 }, 00:35:40.672 "method": "bdev_nvme_attach_controller" 00:35:40.672 },{ 00:35:40.672 "params": { 00:35:40.672 "name": "Nvme1", 00:35:40.672 "trtype": "tcp", 00:35:40.672 "traddr": "10.0.0.2", 00:35:40.672 "adrfam": "ipv4", 00:35:40.672 "trsvcid": "4420", 00:35:40.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:40.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:40.672 "hdgst": false, 00:35:40.672 "ddgst": false 00:35:40.672 }, 00:35:40.672 "method": "bdev_nvme_attach_controller" 00:35:40.672 }' 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:40.672 00:15:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:40.672 00:15:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.672 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:40.672 ... 00:35:40.672 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:40.673 ... 00:35:40.673 fio-3.35 00:35:40.673 Starting 4 threads 00:35:40.673 EAL: No free 2048 kB hugepages reported on node 1 00:35:45.938 00:35:45.938 filename0: (groupid=0, jobs=1): err= 0: pid=959047: Thu Jul 25 00:15:51 2024 00:35:45.938 read: IOPS=1833, BW=14.3MiB/s (15.0MB/s)(71.6MiB/5002msec) 00:35:45.938 slat (nsec): min=6512, max=78609, avg=15393.32, stdev=8752.48 00:35:45.938 clat (usec): min=1489, max=8327, avg=4312.21, stdev=714.12 00:35:45.938 lat (usec): min=1501, max=8364, avg=4327.60, stdev=713.96 00:35:45.938 clat percentiles (usec): 00:35:45.938 | 1.00th=[ 2900], 5.00th=[ 3359], 10.00th=[ 3687], 20.00th=[ 3884], 00:35:45.938 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4293], 00:35:45.938 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5211], 95.00th=[ 5932], 00:35:45.938 | 99.00th=[ 6783], 99.50th=[ 7111], 99.90th=[ 8094], 99.95th=[ 8225], 00:35:45.938 | 99.99th=[ 8356] 00:35:45.938 bw ( KiB/s): min=14080, max=15888, per=24.99%, avg=14673.78, stdev=595.47, samples=9 00:35:45.938 iops : min= 1760, max= 1986, avg=1834.22, stdev=74.43, samples=9 00:35:45.938 lat (msec) : 2=0.02%, 4=29.32%, 10=70.65% 00:35:45.938 cpu : usr=92.20%, sys=6.06%, ctx=173, majf=0, minf=9 00:35:45.938 IO depths : 1=0.5%, 2=8.0%, 4=63.9%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:45.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:45.938 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.938 issued rwts: total=9170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.938 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:45.938 filename0: (groupid=0, jobs=1): err= 0: pid=959048: Thu Jul 25 00:15:51 2024 00:35:45.938 read: IOPS=1791, BW=14.0MiB/s (14.7MB/s)(70.0MiB/5001msec) 00:35:45.938 slat (nsec): min=3891, max=57524, avg=11956.63, stdev=5205.86 00:35:45.938 clat (usec): min=1391, max=9421, avg=4428.59, stdev=767.49 00:35:45.938 lat (usec): min=1409, max=9429, avg=4440.55, stdev=766.81 00:35:45.938 clat percentiles (usec): 00:35:45.938 | 1.00th=[ 3228], 5.00th=[ 3621], 10.00th=[ 3785], 20.00th=[ 3916], 00:35:45.938 | 30.00th=[ 4047], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4359], 00:35:45.938 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 5669], 95.00th=[ 6194], 00:35:45.938 | 99.00th=[ 6980], 99.50th=[ 7308], 99.90th=[ 7963], 99.95th=[ 8586], 00:35:45.938 | 99.99th=[ 9372] 00:35:45.938 bw ( KiB/s): min=12912, max=15120, per=24.42%, avg=14341.33, stdev=765.20, samples=9 00:35:45.938 iops : min= 1614, max= 1890, avg=1792.67, stdev=95.65, samples=9 00:35:45.938 lat (msec) : 2=0.03%, 4=25.16%, 10=74.81% 00:35:45.938 cpu : usr=94.02%, sys=5.52%, ctx=9, majf=0, minf=9 00:35:45.938 IO depths : 1=0.1%, 2=3.0%, 4=68.1%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:45.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.938 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.938 issued rwts: total=8960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.938 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:45.938 filename1: (groupid=0, jobs=1): err= 0: pid=959049: Thu Jul 25 00:15:51 2024 00:35:45.938 read: IOPS=1873, BW=14.6MiB/s (15.3MB/s)(73.2MiB/5003msec) 00:35:45.938 slat (nsec): min=3784, max=57091, avg=11762.04, stdev=5083.54 00:35:45.938 clat (usec): min=1578, max=8164, 
avg=4233.60, stdev=668.47 00:35:45.938 lat (usec): min=1590, max=8177, avg=4245.36, stdev=668.38 00:35:45.938 clat percentiles (usec): 00:35:45.938 | 1.00th=[ 2835], 5.00th=[ 3294], 10.00th=[ 3556], 20.00th=[ 3818], 00:35:45.938 | 30.00th=[ 3982], 40.00th=[ 4080], 50.00th=[ 4178], 60.00th=[ 4293], 00:35:45.938 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4883], 95.00th=[ 5800], 00:35:45.938 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 7177], 99.95th=[ 7308], 00:35:45.938 | 99.99th=[ 8160] 00:35:45.938 bw ( KiB/s): min=14192, max=15904, per=25.52%, avg=14983.80, stdev=507.72, samples=10 00:35:45.938 iops : min= 1774, max= 1988, avg=1872.90, stdev=63.40, samples=10 00:35:45.938 lat (msec) : 2=0.06%, 4=31.92%, 10=68.02% 00:35:45.938 cpu : usr=93.38%, sys=6.12%, ctx=12, majf=0, minf=9 00:35:45.938 IO depths : 1=0.1%, 2=3.9%, 4=68.5%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:45.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.938 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.938 issued rwts: total=9371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.938 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:45.938 filename1: (groupid=0, jobs=1): err= 0: pid=959051: Thu Jul 25 00:15:51 2024 00:35:45.938 read: IOPS=1843, BW=14.4MiB/s (15.1MB/s)(72.0MiB/5001msec) 00:35:45.938 slat (nsec): min=3838, max=55958, avg=12323.70, stdev=5297.45 00:35:45.938 clat (usec): min=800, max=7704, avg=4300.69, stdev=652.79 00:35:45.938 lat (usec): min=808, max=7718, avg=4313.02, stdev=652.61 00:35:45.938 clat percentiles (usec): 00:35:45.938 | 1.00th=[ 2933], 5.00th=[ 3490], 10.00th=[ 3687], 20.00th=[ 3916], 00:35:45.938 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4293], 00:35:45.938 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5080], 95.00th=[ 5735], 00:35:45.938 | 99.00th=[ 6521], 99.50th=[ 6783], 99.90th=[ 7373], 99.95th=[ 7439], 00:35:45.938 | 99.99th=[ 7701] 00:35:45.938 bw ( KiB/s): 
min=14080, max=15504, per=25.07%, avg=14721.44, stdev=392.59, samples=9 00:35:45.938 iops : min= 1760, max= 1938, avg=1840.11, stdev=49.07, samples=9 00:35:45.938 lat (usec) : 1000=0.03% 00:35:45.938 lat (msec) : 2=0.07%, 4=27.87%, 10=72.03% 00:35:45.938 cpu : usr=94.02%, sys=5.50%, ctx=9, majf=0, minf=9 00:35:45.938 IO depths : 1=0.1%, 2=4.8%, 4=65.5%, 8=29.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:45.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.938 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.938 issued rwts: total=9221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.938 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:45.938 00:35:45.938 Run status group 0 (all jobs): 00:35:45.938 READ: bw=57.3MiB/s (60.1MB/s), 14.0MiB/s-14.6MiB/s (14.7MB/s-15.3MB/s), io=287MiB (301MB), run=5001-5003msec 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.938 00:35:45.938 real 0m24.276s 00:35:45.938 user 4m28.492s 00:35:45.938 sys 0m8.680s 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:45.938 00:15:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.938 ************************************ 00:35:45.938 END TEST fio_dif_rand_params 00:35:45.938 ************************************ 00:35:45.938 00:15:51 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:45.938 00:15:51 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:45.938 00:15:51 nvmf_dif -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:35:45.938 00:15:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:45.938 ************************************ 00:35:45.938 START TEST fio_dif_digest 00:35:45.938 ************************************ 00:35:45.938 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:35:45.938 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:45.938 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:45.938 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # 
set +x 00:35:45.939 bdev_null0 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:45.939 [2024-07-25 00:15:51.593356] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest 
-- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:45.939 { 00:35:45.939 "params": { 00:35:45.939 "name": "Nvme$subsystem", 00:35:45.939 "trtype": "$TEST_TRANSPORT", 00:35:45.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:45.939 "adrfam": "ipv4", 00:35:45.939 "trsvcid": "$NVMF_PORT", 00:35:45.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:45.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:45.939 "hdgst": ${hdgst:-false}, 00:35:45.939 "ddgst": ${ddgst:-false} 00:35:45.939 }, 00:35:45.939 "method": "bdev_nvme_attach_controller" 00:35:45.939 } 00:35:45.939 EOF 00:35:45.939 )") 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 
-- # shift 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:45.939 "params": { 00:35:45.939 "name": "Nvme0", 00:35:45.939 "trtype": "tcp", 00:35:45.939 "traddr": "10.0.0.2", 00:35:45.939 "adrfam": "ipv4", 00:35:45.939 "trsvcid": "4420", 00:35:45.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:45.939 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:45.939 "hdgst": true, 00:35:45.939 "ddgst": true 00:35:45.939 }, 00:35:45.939 "method": "bdev_nvme_attach_controller" 00:35:45.939 }' 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:45.939 00:15:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:46.197 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:46.197 ... 00:35:46.197 fio-3.35 00:35:46.197 Starting 3 threads 00:35:46.197 EAL: No free 2048 kB hugepages reported on node 1 00:35:58.424 00:35:58.424 filename0: (groupid=0, jobs=1): err= 0: pid=959879: Thu Jul 25 00:16:02 2024 00:35:58.424 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(261MiB/10046msec) 00:35:58.424 slat (nsec): min=4967, max=53768, avg=18305.16, stdev=5476.59 00:35:58.424 clat (usec): min=8543, max=51481, avg=14417.68, stdev=1838.77 00:35:58.424 lat (usec): min=8565, max=51502, avg=14435.99, stdev=1837.99 00:35:58.424 clat percentiles (usec): 00:35:58.424 | 1.00th=[ 9765], 5.00th=[11731], 10.00th=[12649], 20.00th=[13435], 00:35:58.424 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:35:58.424 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16581], 00:35:58.424 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19530], 99.95th=[46924], 00:35:58.424 | 99.99th=[51643] 00:35:58.424 bw ( KiB/s): min=24320, max=28160, per=34.45%, avg=26649.60, stdev=1141.55, samples=20 00:35:58.424 iops : min= 190, max= 220, avg=208.20, stdev= 8.92, samples=20 00:35:58.424 lat (msec) : 
10=1.49%, 20=98.42%, 50=0.05%, 100=0.05% 00:35:58.424 cpu : usr=92.54%, sys=6.92%, ctx=38, majf=0, minf=133 00:35:58.424 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.424 issued rwts: total=2084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:58.424 filename0: (groupid=0, jobs=1): err= 0: pid=959880: Thu Jul 25 00:16:02 2024 00:35:58.424 read: IOPS=206, BW=25.8MiB/s (27.0MB/s)(259MiB/10047msec) 00:35:58.424 slat (nsec): min=7325, max=68424, avg=15190.31, stdev=4687.93 00:35:58.424 clat (usec): min=8882, max=57117, avg=14513.20, stdev=3875.07 00:35:58.424 lat (usec): min=8899, max=57131, avg=14528.39, stdev=3875.04 00:35:58.424 clat percentiles (usec): 00:35:58.424 | 1.00th=[10159], 5.00th=[12256], 10.00th=[12780], 20.00th=[13304], 00:35:58.424 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:35:58.424 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15795], 95.00th=[16319], 00:35:58.424 | 99.00th=[18220], 99.50th=[54264], 99.90th=[56361], 99.95th=[56361], 00:35:58.424 | 99.99th=[56886] 00:35:58.424 bw ( KiB/s): min=23086, max=28672, per=34.23%, avg=26472.70, stdev=1524.75, samples=20 00:35:58.424 iops : min= 180, max= 224, avg=206.80, stdev=11.95, samples=20 00:35:58.424 lat (msec) : 10=0.82%, 20=98.36%, 50=0.05%, 100=0.77% 00:35:58.424 cpu : usr=91.70%, sys=7.84%, ctx=25, majf=0, minf=193 00:35:58.424 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.425 issued rwts: total=2071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.425 latency : target=0, window=0, percentile=100.00%, 
depth=3 00:35:58.425 filename0: (groupid=0, jobs=1): err= 0: pid=959881: Thu Jul 25 00:16:02 2024 00:35:58.425 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(240MiB/10046msec) 00:35:58.425 slat (nsec): min=4836, max=41257, avg=15456.96, stdev=4528.22 00:35:58.425 clat (usec): min=8332, max=59418, avg=15666.40, stdev=3666.38 00:35:58.425 lat (usec): min=8352, max=59431, avg=15681.85, stdev=3666.19 00:35:58.425 clat percentiles (usec): 00:35:58.425 | 1.00th=[10028], 5.00th=[13042], 10.00th=[13698], 20.00th=[14353], 00:35:58.425 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15401], 60.00th=[15795], 00:35:58.425 | 70.00th=[16188], 80.00th=[16581], 90.00th=[17171], 95.00th=[17957], 00:35:58.425 | 99.00th=[19792], 99.50th=[54789], 99.90th=[57410], 99.95th=[59507], 00:35:58.425 | 99.99th=[59507] 00:35:58.425 bw ( KiB/s): min=22272, max=26624, per=31.66%, avg=24486.40, stdev=1279.39, samples=20 00:35:58.425 iops : min= 174, max= 208, avg=191.30, stdev=10.00, samples=20 00:35:58.425 lat (msec) : 10=0.73%, 20=98.49%, 50=0.10%, 100=0.68% 00:35:58.425 cpu : usr=92.56%, sys=6.97%, ctx=24, majf=0, minf=131 00:35:58.425 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.425 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.425 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:58.425 00:35:58.425 Run status group 0 (all jobs): 00:35:58.425 READ: bw=75.5MiB/s (79.2MB/s), 23.8MiB/s-25.9MiB/s (25.0MB/s-27.2MB/s), io=759MiB (796MB), run=10046-10047msec 00:35:58.425 00:16:02 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:58.425 00:16:02 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:58.425 00:16:02 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:58.425 00:16:02 nvmf_dif.fio_dif_digest -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:35:58.425 00:16:02 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:58.425 00:16:02 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:58.425 00:16:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.425 00:16:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:58.425 00:16:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.425 00:16:02 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:58.425 00:16:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.425 00:16:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:58.425 00:16:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.425 00:35:58.425 real 0m11.155s 00:35:58.425 user 0m28.894s 00:35:58.425 sys 0m2.455s 00:35:58.425 00:16:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:58.425 00:16:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:58.425 ************************************ 00:35:58.425 END TEST fio_dif_digest 00:35:58.425 ************************************ 00:35:58.425 00:16:02 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:58.425 00:16:02 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:58.425 00:16:02 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:58.425 00:16:02 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:35:58.425 00:16:02 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:58.425 00:16:02 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:35:58.425 00:16:02 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:58.425 00:16:02 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:58.425 rmmod nvme_tcp 00:35:58.425 rmmod 
nvme_fabrics 00:35:58.425 rmmod nvme_keyring 00:35:58.425 00:16:02 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:58.425 00:16:02 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:35:58.425 00:16:02 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:35:58.425 00:16:02 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 953323 ']' 00:35:58.425 00:16:02 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 953323 00:35:58.425 00:16:02 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 953323 ']' 00:35:58.425 00:16:02 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 953323 00:35:58.425 00:16:02 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:35:58.425 00:16:02 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:58.425 00:16:02 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 953323 00:35:58.425 00:16:02 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:58.425 00:16:02 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:58.425 00:16:02 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 953323' 00:35:58.425 killing process with pid 953323 00:35:58.425 00:16:02 nvmf_dif -- common/autotest_common.sh@965 -- # kill 953323 00:35:58.425 00:16:02 nvmf_dif -- common/autotest_common.sh@970 -- # wait 953323 00:35:58.425 00:16:03 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:58.425 00:16:03 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:58.425 Waiting for block devices as requested 00:35:58.682 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:58.682 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:58.682 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:58.939 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:58.939 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:58.939 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:58.939 0000:00:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:35:59.197 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:59.197 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:59.197 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:59.455 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:59.455 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:59.455 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:59.455 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:59.713 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:59.713 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:59.713 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:59.713 00:16:05 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:59.713 00:16:05 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:59.713 00:16:05 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:59.713 00:16:05 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:59.713 00:16:05 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:59.713 00:16:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:59.713 00:16:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:02.244 00:16:07 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:02.244 00:36:02.244 real 1m6.550s 00:36:02.244 user 6m22.559s 00:36:02.244 sys 0m21.337s 00:36:02.244 00:16:07 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:02.244 00:16:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:02.244 ************************************ 00:36:02.244 END TEST nvmf_dif 00:36:02.244 ************************************ 00:36:02.244 00:16:07 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:02.244 00:16:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:02.244 00:16:07 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:36:02.244 00:16:07 -- common/autotest_common.sh@10 -- # set +x 00:36:02.244 ************************************ 00:36:02.244 START TEST nvmf_abort_qd_sizes 00:36:02.244 ************************************ 00:36:02.244 00:16:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:02.244 * Looking for test storage... 00:36:02.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:02.244 00:16:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:02.244 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:02.244 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:02.244 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:02.244 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:02.244 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:02.244 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:02.244 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:02.244 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:02.244 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:02.244 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:02.244 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:02.244 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:02.244 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:02.245 
00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:02.245 00:16:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:04.145 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:04.145 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:04.145 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:04.145 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:04.145 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:04.145 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:04.145 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:04.145 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:04.145 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:04.145 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:04.145 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:04.145 00:16:09 
nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:04.145 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:04.145 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:04.145 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:04.145 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:04.145 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:04.145 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:04.146 
00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:36:04.146 Found 0000:09:00.0 (0x8086 - 0x159b) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:36:04.146 Found 0000:09:00.1 (0x8086 - 0x159b) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:04.146 
00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:36:04.146 Found net devices under 0000:09:00.0: cvl_0_0 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:36:04.146 Found net devices under 0000:09:00.1: cvl_0_1 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- 
nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
lo up 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:04.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:04.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:36:04.146 00:36:04.146 --- 10.0.0.2 ping statistics --- 00:36:04.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:04.146 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:04.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:04.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:36:04.146 00:36:04.146 --- 10.0.0.1 ping statistics --- 00:36:04.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:04.146 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:04.146 00:16:09 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:05.082 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:05.083 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:05.083 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:05.083 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:05.083 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:05.083 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:05.083 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:05.083 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:05.083 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:05.083 0000:80:04.6 (8086 0e26): ioatdma -> 
vfio-pci 00:36:05.083 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:05.083 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:05.083 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:05.083 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:05.083 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:05.083 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:06.019 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=964661 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 964661 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 964661 ']' 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 
-- # local max_retries=100 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:06.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:06.278 00:16:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:06.278 [2024-07-25 00:16:11.893588] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:36:06.278 [2024-07-25 00:16:11.893675] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:06.278 EAL: No free 2048 kB hugepages reported on node 1 00:36:06.278 [2024-07-25 00:16:11.956615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:06.536 [2024-07-25 00:16:12.044838] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:06.536 [2024-07-25 00:16:12.044902] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:06.536 [2024-07-25 00:16:12.044916] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:06.536 [2024-07-25 00:16:12.044926] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:06.536 [2024-07-25 00:16:12.044951] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:06.536 [2024-07-25 00:16:12.045040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:06.536 [2024-07-25 00:16:12.045112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:06.536 [2024-07-25 00:16:12.045170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:06.536 [2024-07-25 00:16:12.045173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 
00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:36:06.536 00:16:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:06.537 00:16:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:06.537 00:16:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:06.537 00:16:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:06.537 ************************************ 00:36:06.537 START TEST spdk_target_abort 00:36:06.537 ************************************ 00:36:06.537 00:16:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:06.537 00:16:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:06.537 00:16:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:36:06.537 00:16:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.537 00:16:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.814 spdk_targetn1 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.814 [2024-07-25 00:16:15.048026] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.814 [2024-07-25 00:16:15.080320] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:09.814 00:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:09.814 EAL: No free 2048 kB hugepages reported on node 1 00:36:13.091 Initializing NVMe Controllers 00:36:13.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:13.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:13.091 Initialization complete. Launching workers. 
00:36:13.091 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8558, failed: 0 00:36:13.091 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1355, failed to submit 7203 00:36:13.091 success 755, unsuccess 600, failed 0 00:36:13.091 00:16:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:13.091 00:16:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:13.091 EAL: No free 2048 kB hugepages reported on node 1 00:36:16.367 Initializing NVMe Controllers 00:36:16.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:16.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:16.367 Initialization complete. Launching workers. 
00:36:16.367 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8580, failed: 0 00:36:16.367 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1257, failed to submit 7323 00:36:16.367 success 327, unsuccess 930, failed 0 00:36:16.367 00:16:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:16.367 00:16:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:16.367 EAL: No free 2048 kB hugepages reported on node 1 00:36:19.642 Initializing NVMe Controllers 00:36:19.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:19.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:19.642 Initialization complete. Launching workers. 
00:36:19.642 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29612, failed: 0 00:36:19.642 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2731, failed to submit 26881 00:36:19.642 success 497, unsuccess 2234, failed 0 00:36:19.642 00:16:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:19.642 00:16:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.642 00:16:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.642 00:16:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.642 00:16:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:19.642 00:16:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.642 00:16:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.573 00:16:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.573 00:16:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 964661 00:36:20.573 00:16:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 964661 ']' 00:36:20.573 00:16:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 964661 00:36:20.573 00:16:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:36:20.573 00:16:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:20.573 00:16:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 964661 00:36:20.573 00:16:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:20.573 00:16:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:20.573 00:16:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 964661' 00:36:20.573 killing process with pid 964661 00:36:20.573 00:16:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 964661 00:36:20.573 00:16:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 964661 00:36:20.831 00:36:20.831 real 0m14.115s 00:36:20.831 user 0m52.611s 00:36:20.831 sys 0m2.846s 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.831 ************************************ 00:36:20.831 END TEST spdk_target_abort 00:36:20.831 ************************************ 00:36:20.831 00:16:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:20.831 00:16:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:20.831 00:16:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:20.831 00:16:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:20.831 ************************************ 00:36:20.831 START TEST kernel_target_abort 00:36:20.831 ************************************ 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:20.831 00:16:26 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:20.831 00:16:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:21.765 Waiting for block devices as requested 00:36:21.765 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:22.026 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:22.026 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:22.026 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:22.026 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:22.316 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:22.316 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:22.316 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:22.316 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:36:22.575 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:22.575 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:22.575 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:22.834 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:22.834 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:22.834 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:22.834 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:23.094 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local 
device=nvme0n1 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:23.094 No valid GPT data, bailing 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:23.094 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:36:23.353 00:36:23.353 Discovery Log Number of Records 2, Generation counter 2 00:36:23.353 =====Discovery Log Entry 0====== 00:36:23.353 trtype: tcp 00:36:23.353 adrfam: ipv4 00:36:23.353 subtype: current discovery subsystem 00:36:23.353 treq: not specified, sq flow control disable supported 00:36:23.353 portid: 1 00:36:23.353 trsvcid: 4420 00:36:23.353 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:23.353 traddr: 10.0.0.1 00:36:23.353 eflags: none 00:36:23.353 sectype: none 00:36:23.353 =====Discovery Log Entry 1====== 00:36:23.353 trtype: tcp 00:36:23.353 adrfam: ipv4 00:36:23.353 subtype: nvme subsystem 00:36:23.353 treq: not specified, sq flow control disable supported 00:36:23.353 portid: 1 00:36:23.353 trsvcid: 4420 00:36:23.353 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:23.353 traddr: 10.0.0.1 00:36:23.353 eflags: none 00:36:23.353 sectype: none 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:23.353 00:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:23.353 EAL: No free 2048 kB hugepages reported on node 1 00:36:26.634 Initializing NVMe Controllers 00:36:26.634 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:26.634 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:26.634 Initialization complete. Launching workers. 
00:36:26.634 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30758, failed: 0 00:36:26.634 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30758, failed to submit 0 00:36:26.634 success 0, unsuccess 30758, failed 0 00:36:26.634 00:16:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:26.634 00:16:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:26.634 EAL: No free 2048 kB hugepages reported on node 1 00:36:29.914 Initializing NVMe Controllers 00:36:29.914 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:29.914 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:29.914 Initialization complete. Launching workers. 
00:36:29.914 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62400, failed: 0 00:36:29.914 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15742, failed to submit 46658 00:36:29.914 success 0, unsuccess 15742, failed 0 00:36:29.914 00:16:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:29.914 00:16:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:29.914 EAL: No free 2048 kB hugepages reported on node 1 00:36:32.441 Initializing NVMe Controllers 00:36:32.441 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:32.441 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:32.441 Initialization complete. Launching workers. 
00:36:32.441 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 61246, failed: 0 00:36:32.441 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15294, failed to submit 45952 00:36:32.441 success 0, unsuccess 15294, failed 0 00:36:32.441 00:16:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:32.441 00:16:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:32.441 00:16:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:32.441 00:16:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:32.441 00:16:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:32.441 00:16:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:32.441 00:16:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:32.441 00:16:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:32.441 00:16:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:32.699 00:16:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:33.633 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:33.633 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:33.633 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:33.633 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:33.633 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:33.633 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:33.633 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:33.633 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:33.633 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:33.891 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:33.891 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:33.891 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:33.891 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:33.891 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:33.891 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:33.891 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:34.826 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:36:34.826 00:36:34.826 real 0m14.109s 00:36:34.826 user 0m4.942s 00:36:34.826 sys 0m3.303s 00:36:34.826 00:16:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:34.826 00:16:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:34.826 ************************************ 00:36:34.826 END TEST kernel_target_abort 00:36:34.826 ************************************ 00:36:34.826 00:16:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:34.826 00:16:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:34.826 00:16:40 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:34.826 00:16:40 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:34.826 00:16:40 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:34.826 00:16:40 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:34.826 00:16:40 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:34.826 00:16:40 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:34.826 rmmod nvme_tcp 00:36:34.826 rmmod nvme_fabrics 00:36:34.826 rmmod nvme_keyring 00:36:34.826 00:16:40 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:36:35.084 00:16:40 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:35.084 00:16:40 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:35.084 00:16:40 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 964661 ']' 00:36:35.084 00:16:40 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 964661 00:36:35.084 00:16:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 964661 ']' 00:36:35.084 00:16:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 964661 00:36:35.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (964661) - No such process 00:36:35.084 00:16:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 964661 is not found' 00:36:35.084 Process with pid 964661 is not found 00:36:35.084 00:16:40 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:35.084 00:16:40 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:36.018 Waiting for block devices as requested 00:36:36.018 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:36.018 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:36.276 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:36.276 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:36.276 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:36.276 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:36.276 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:36.535 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:36.535 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:36:36.792 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:36.792 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:36.792 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:36.792 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:37.051 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:37.051 0000:80:04.2 
(8086 0e22): vfio-pci -> ioatdma 00:36:37.051 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:37.051 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:37.310 00:16:42 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:37.310 00:16:42 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:37.310 00:16:42 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:37.310 00:16:42 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:37.310 00:16:42 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:37.310 00:16:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:37.310 00:16:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:39.210 00:16:44 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:39.210 00:36:39.210 real 0m37.390s 00:36:39.210 user 0m59.498s 00:36:39.210 sys 0m9.400s 00:36:39.210 00:16:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:39.210 00:16:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:39.210 ************************************ 00:36:39.210 END TEST nvmf_abort_qd_sizes 00:36:39.210 ************************************ 00:36:39.210 00:16:44 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:39.210 00:16:44 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:39.210 00:16:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:39.210 00:16:44 -- common/autotest_common.sh@10 -- # set +x 00:36:39.469 ************************************ 00:36:39.469 START TEST keyring_file 00:36:39.469 ************************************ 00:36:39.469 00:16:44 keyring_file -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:39.469 * Looking for test storage... 00:36:39.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:39.469 00:16:44 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:39.469 00:16:44 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:39.469 00:16:44 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:39.469 00:16:44 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:39.469 00:16:44 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:39.469 00:16:44 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.469 00:16:44 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.469 00:16:44 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.469 00:16:44 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:39.469 00:16:44 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:39.469 00:16:44 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:39.469 00:16:44 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:39.469 00:16:44 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:39.469 00:16:44 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:39.469 00:16:44 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:39.469 00:16:44 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:39.469 00:16:44 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:39.469 00:16:44 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:39.469 00:16:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:39.469 00:16:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:39.469 00:16:44 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:39.469 00:16:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:39.469 00:16:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:39.469 00:16:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.7BH89CKXRs 00:36:39.469 00:16:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:39.469 00:16:45 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:39.469 00:16:45 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:39.469 00:16:45 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:39.469 00:16:45 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:39.469 00:16:45 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:39.469 00:16:45 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:39.469 00:16:45 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7BH89CKXRs 00:36:39.469 00:16:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7BH89CKXRs 00:36:39.469 00:16:45 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.7BH89CKXRs 00:36:39.469 00:16:45 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:39.469 00:16:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:39.469 00:16:45 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:39.469 00:16:45 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:39.469 00:16:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:39.469 00:16:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:39.469 00:16:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yrJXMdPnsx 00:36:39.469 00:16:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:39.469 00:16:45 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:39.469 00:16:45 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:39.469 00:16:45 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:39.469 00:16:45 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:39.469 00:16:45 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:39.469 00:16:45 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:39.469 00:16:45 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yrJXMdPnsx 00:36:39.469 00:16:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yrJXMdPnsx 00:36:39.469 00:16:45 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.yrJXMdPnsx 00:36:39.469 00:16:45 keyring_file -- keyring/file.sh@30 -- # tgtpid=970421 00:36:39.469 00:16:45 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:39.469 00:16:45 keyring_file -- keyring/file.sh@32 -- # waitforlisten 970421 00:36:39.470 00:16:45 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 970421 ']' 00:36:39.470 00:16:45 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:39.470 00:16:45 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:39.470 00:16:45 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:39.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:39.470 00:16:45 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:39.470 00:16:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:39.470 [2024-07-25 00:16:45.128883] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:36:39.470 [2024-07-25 00:16:45.128964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid970421 ] 00:36:39.470 EAL: No free 2048 kB hugepages reported on node 1 00:36:39.727 [2024-07-25 00:16:45.193304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.727 [2024-07-25 00:16:45.290835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:39.985 00:16:45 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:39.985 [2024-07-25 00:16:45.563995] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:39.985 null0 00:36:39.985 [2024-07-25 00:16:45.596042] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:39.985 [2024-07-25 00:16:45.596554] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:39.985 [2024-07-25 00:16:45.604052] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.985 00:16:45 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 
nqn.2016-06.io.spdk:cnode0 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:39.985 [2024-07-25 00:16:45.612069] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:39.985 request: 00:36:39.985 { 00:36:39.985 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:39.985 "secure_channel": false, 00:36:39.985 "listen_address": { 00:36:39.985 "trtype": "tcp", 00:36:39.985 "traddr": "127.0.0.1", 00:36:39.985 "trsvcid": "4420" 00:36:39.985 }, 00:36:39.985 "method": "nvmf_subsystem_add_listener", 00:36:39.985 "req_id": 1 00:36:39.985 } 00:36:39.985 Got JSON-RPC error response 00:36:39.985 response: 00:36:39.985 { 00:36:39.985 "code": -32602, 00:36:39.985 "message": "Invalid parameters" 00:36:39.985 } 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:39.985 00:16:45 keyring_file -- keyring/file.sh@46 -- # bperfpid=970429 00:36:39.985 00:16:45 keyring_file -- keyring/file.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:39.985 00:16:45 keyring_file -- keyring/file.sh@48 -- # waitforlisten 970429 /var/tmp/bperf.sock 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 970429 ']' 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:39.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:39.985 00:16:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:39.985 [2024-07-25 00:16:45.658063] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:36:39.985 [2024-07-25 00:16:45.658163] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid970429 ] 00:36:39.985 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.243 [2024-07-25 00:16:45.718441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:40.243 [2024-07-25 00:16:45.811650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:40.243 00:16:45 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:40.243 00:16:45 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:40.243 00:16:45 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7BH89CKXRs 00:36:40.243 00:16:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7BH89CKXRs 00:36:40.500 00:16:46 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yrJXMdPnsx 00:36:40.500 00:16:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yrJXMdPnsx 00:36:40.758 00:16:46 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:40.758 00:16:46 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:40.758 00:16:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:40.758 00:16:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:40.758 00:16:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:41.015 00:16:46 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.7BH89CKXRs == \/\t\m\p\/\t\m\p\.\7\B\H\8\9\C\K\X\R\s ]] 00:36:41.015 
00:16:46 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:41.015 00:16:46 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:41.015 00:16:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.015 00:16:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.016 00:16:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:41.273 00:16:46 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.yrJXMdPnsx == \/\t\m\p\/\t\m\p\.\y\r\J\X\M\d\P\n\s\x ]] 00:36:41.273 00:16:46 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:41.273 00:16:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:41.273 00:16:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:41.273 00:16:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.273 00:16:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.273 00:16:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:41.530 00:16:47 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:41.530 00:16:47 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:41.530 00:16:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:41.530 00:16:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:41.530 00:16:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.530 00:16:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.530 00:16:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:41.787 00:16:47 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:41.787 00:16:47 keyring_file -- 
keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:41.788 00:16:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:42.045 [2024-07-25 00:16:47.620171] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:42.045 nvme0n1 00:36:42.045 00:16:47 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:42.045 00:16:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:42.045 00:16:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:42.045 00:16:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:42.045 00:16:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.045 00:16:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:42.302 00:16:47 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:42.302 00:16:47 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:42.302 00:16:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:42.302 00:16:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:42.302 00:16:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:42.302 00:16:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.302 00:16:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:42.559 00:16:48 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:42.559 00:16:48 
keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:42.816 Running I/O for 1 seconds... 00:36:43.771 00:36:43.771 Latency(us) 00:36:43.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:43.771 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:43.771 nvme0n1 : 1.03 4585.41 17.91 0.00 0.00 27549.91 10922.67 40777.96 00:36:43.771 =================================================================================================================== 00:36:43.771 Total : 4585.41 17.91 0.00 0.00 27549.91 10922.67 40777.96 00:36:43.771 0 00:36:43.771 00:16:49 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:43.771 00:16:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:44.029 00:16:49 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:44.029 00:16:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:44.029 00:16:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:44.029 00:16:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:44.029 00:16:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.029 00:16:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:44.287 00:16:49 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:44.287 00:16:49 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:44.287 00:16:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:44.287 00:16:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:44.287 00:16:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:36:44.287 00:16:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:44.287 00:16:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.545 00:16:50 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:44.545 00:16:50 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:44.545 00:16:50 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:44.545 00:16:50 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:44.545 00:16:50 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:44.545 00:16:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:44.545 00:16:50 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:44.545 00:16:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:44.545 00:16:50 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:44.545 00:16:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:44.803 [2024-07-25 00:16:50.371730] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 
107: Transport endpoint is not connected 00:36:44.803 [2024-07-25 00:16:50.372252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f3310 (107): Transport endpoint is not connected 00:36:44.803 [2024-07-25 00:16:50.373245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f3310 (9): Bad file descriptor 00:36:44.803 [2024-07-25 00:16:50.374244] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:44.803 [2024-07-25 00:16:50.374263] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:44.803 [2024-07-25 00:16:50.374276] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:44.803 request: 00:36:44.803 { 00:36:44.803 "name": "nvme0", 00:36:44.803 "trtype": "tcp", 00:36:44.803 "traddr": "127.0.0.1", 00:36:44.803 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:44.803 "adrfam": "ipv4", 00:36:44.803 "trsvcid": "4420", 00:36:44.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:44.803 "psk": "key1", 00:36:44.803 "method": "bdev_nvme_attach_controller", 00:36:44.803 "req_id": 1 00:36:44.803 } 00:36:44.803 Got JSON-RPC error response 00:36:44.803 response: 00:36:44.803 { 00:36:44.803 "code": -5, 00:36:44.803 "message": "Input/output error" 00:36:44.803 } 00:36:44.803 00:16:50 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:44.803 00:16:50 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:44.803 00:16:50 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:44.803 00:16:50 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:44.803 00:16:50 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:44.803 00:16:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:44.803 00:16:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:44.803 00:16:50 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:44.803 00:16:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.803 00:16:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:45.061 00:16:50 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:45.061 00:16:50 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:45.061 00:16:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:45.061 00:16:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:45.061 00:16:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.061 00:16:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.061 00:16:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:45.319 00:16:50 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:45.319 00:16:50 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:45.319 00:16:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:45.576 00:16:51 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:45.576 00:16:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:45.834 00:16:51 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:45.834 00:16:51 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:45.834 00:16:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.090 00:16:51 keyring_file -- 
keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:46.090 00:16:51 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.7BH89CKXRs 00:36:46.090 00:16:51 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.7BH89CKXRs 00:36:46.090 00:16:51 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:46.090 00:16:51 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.7BH89CKXRs 00:36:46.090 00:16:51 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:46.090 00:16:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:46.090 00:16:51 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:46.090 00:16:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:46.090 00:16:51 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7BH89CKXRs 00:36:46.090 00:16:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7BH89CKXRs 00:36:46.347 [2024-07-25 00:16:51.873200] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.7BH89CKXRs': 0100660 00:36:46.347 [2024-07-25 00:16:51.873234] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:46.347 request: 00:36:46.347 { 00:36:46.347 "name": "key0", 00:36:46.347 "path": "/tmp/tmp.7BH89CKXRs", 00:36:46.347 "method": "keyring_file_add_key", 00:36:46.347 "req_id": 1 00:36:46.347 } 00:36:46.347 Got JSON-RPC error response 00:36:46.347 response: 00:36:46.347 { 00:36:46.347 "code": -1, 00:36:46.347 "message": "Operation not permitted" 00:36:46.347 } 00:36:46.347 00:16:51 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:46.347 00:16:51 keyring_file -- common/autotest_common.sh@659 -- # (( es > 
128 )) 00:36:46.347 00:16:51 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:46.347 00:16:51 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:46.347 00:16:51 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.7BH89CKXRs 00:36:46.347 00:16:51 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7BH89CKXRs 00:36:46.347 00:16:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7BH89CKXRs 00:36:46.604 00:16:52 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.7BH89CKXRs 00:36:46.604 00:16:52 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:46.604 00:16:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:46.604 00:16:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:46.604 00:16:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.604 00:16:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:46.604 00:16:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.860 00:16:52 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:46.860 00:16:52 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:46.860 00:16:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:46.860 00:16:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:46.860 00:16:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:46.860 
00:16:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:46.860 00:16:52 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:46.860 00:16:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:46.860 00:16:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:46.860 00:16:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:47.118 [2024-07-25 00:16:52.643304] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.7BH89CKXRs': No such file or directory 00:36:47.118 [2024-07-25 00:16:52.643344] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:47.118 [2024-07-25 00:16:52.643371] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:47.118 [2024-07-25 00:16:52.643383] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:47.118 [2024-07-25 00:16:52.643394] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:47.118 request: 00:36:47.118 { 00:36:47.118 "name": "nvme0", 00:36:47.118 "trtype": "tcp", 00:36:47.118 "traddr": "127.0.0.1", 00:36:47.118 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:47.118 "adrfam": "ipv4", 00:36:47.118 "trsvcid": "4420", 00:36:47.118 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:47.118 "psk": "key0", 00:36:47.118 "method": "bdev_nvme_attach_controller", 00:36:47.118 "req_id": 1 00:36:47.118 } 00:36:47.118 Got JSON-RPC error response 00:36:47.118 response: 
00:36:47.118 { 00:36:47.118 "code": -19, 00:36:47.118 "message": "No such device" 00:36:47.118 } 00:36:47.118 00:16:52 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:47.118 00:16:52 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:47.118 00:16:52 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:47.118 00:16:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:47.118 00:16:52 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:47.118 00:16:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:47.376 00:16:52 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:47.376 00:16:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:47.376 00:16:52 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:47.376 00:16:52 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:47.376 00:16:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:47.376 00:16:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:47.376 00:16:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6WzZC0JLg9 00:36:47.376 00:16:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:47.376 00:16:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:47.376 00:16:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:47.376 00:16:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:47.376 00:16:52 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:47.376 00:16:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:47.376 00:16:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:47.376 
00:16:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6WzZC0JLg9 00:36:47.376 00:16:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6WzZC0JLg9 00:36:47.376 00:16:52 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.6WzZC0JLg9 00:36:47.376 00:16:52 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6WzZC0JLg9 00:36:47.376 00:16:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6WzZC0JLg9 00:36:47.634 00:16:53 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:47.634 00:16:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:47.891 nvme0n1 00:36:47.891 00:16:53 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:47.891 00:16:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:47.891 00:16:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:47.891 00:16:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:47.891 00:16:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:47.891 00:16:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.147 00:16:53 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:48.147 00:16:53 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:48.147 00:16:53 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:48.405 00:16:53 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:48.405 00:16:53 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:48.405 00:16:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.405 00:16:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:48.405 00:16:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.663 00:16:54 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:48.663 00:16:54 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:48.663 00:16:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:48.663 00:16:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:48.663 00:16:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.663 00:16:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:48.663 00:16:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.921 00:16:54 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:48.921 00:16:54 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:48.921 00:16:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:49.179 00:16:54 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:49.179 00:16:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.179 00:16:54 keyring_file -- keyring/file.sh@104 -- # jq 
length 00:36:49.437 00:16:54 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:49.437 00:16:54 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6WzZC0JLg9 00:36:49.437 00:16:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6WzZC0JLg9 00:36:49.695 00:16:55 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yrJXMdPnsx 00:36:49.695 00:16:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yrJXMdPnsx 00:36:49.952 00:16:55 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:49.953 00:16:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:50.211 nvme0n1 00:36:50.211 00:16:55 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:50.211 00:16:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:50.469 00:16:56 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:50.469 "subsystems": [ 00:36:50.469 { 00:36:50.469 "subsystem": "keyring", 00:36:50.469 "config": [ 00:36:50.469 { 00:36:50.469 "method": "keyring_file_add_key", 00:36:50.469 "params": { 00:36:50.469 "name": "key0", 00:36:50.469 "path": "/tmp/tmp.6WzZC0JLg9" 00:36:50.469 } 00:36:50.469 }, 00:36:50.469 { 00:36:50.469 "method": "keyring_file_add_key", 00:36:50.469 "params": { 00:36:50.469 "name": "key1", 00:36:50.469 "path": 
"/tmp/tmp.yrJXMdPnsx" 00:36:50.469 } 00:36:50.469 } 00:36:50.469 ] 00:36:50.469 }, 00:36:50.469 { 00:36:50.469 "subsystem": "iobuf", 00:36:50.469 "config": [ 00:36:50.469 { 00:36:50.469 "method": "iobuf_set_options", 00:36:50.469 "params": { 00:36:50.469 "small_pool_count": 8192, 00:36:50.469 "large_pool_count": 1024, 00:36:50.469 "small_bufsize": 8192, 00:36:50.469 "large_bufsize": 135168 00:36:50.469 } 00:36:50.469 } 00:36:50.469 ] 00:36:50.469 }, 00:36:50.469 { 00:36:50.469 "subsystem": "sock", 00:36:50.469 "config": [ 00:36:50.469 { 00:36:50.469 "method": "sock_set_default_impl", 00:36:50.469 "params": { 00:36:50.469 "impl_name": "posix" 00:36:50.469 } 00:36:50.469 }, 00:36:50.469 { 00:36:50.469 "method": "sock_impl_set_options", 00:36:50.469 "params": { 00:36:50.469 "impl_name": "ssl", 00:36:50.469 "recv_buf_size": 4096, 00:36:50.469 "send_buf_size": 4096, 00:36:50.469 "enable_recv_pipe": true, 00:36:50.469 "enable_quickack": false, 00:36:50.469 "enable_placement_id": 0, 00:36:50.469 "enable_zerocopy_send_server": true, 00:36:50.469 "enable_zerocopy_send_client": false, 00:36:50.469 "zerocopy_threshold": 0, 00:36:50.469 "tls_version": 0, 00:36:50.469 "enable_ktls": false 00:36:50.469 } 00:36:50.469 }, 00:36:50.469 { 00:36:50.469 "method": "sock_impl_set_options", 00:36:50.469 "params": { 00:36:50.469 "impl_name": "posix", 00:36:50.469 "recv_buf_size": 2097152, 00:36:50.469 "send_buf_size": 2097152, 00:36:50.469 "enable_recv_pipe": true, 00:36:50.469 "enable_quickack": false, 00:36:50.469 "enable_placement_id": 0, 00:36:50.469 "enable_zerocopy_send_server": true, 00:36:50.469 "enable_zerocopy_send_client": false, 00:36:50.469 "zerocopy_threshold": 0, 00:36:50.469 "tls_version": 0, 00:36:50.469 "enable_ktls": false 00:36:50.469 } 00:36:50.469 } 00:36:50.469 ] 00:36:50.469 }, 00:36:50.469 { 00:36:50.469 "subsystem": "vmd", 00:36:50.469 "config": [] 00:36:50.469 }, 00:36:50.469 { 00:36:50.469 "subsystem": "accel", 00:36:50.469 "config": [ 00:36:50.469 { 
00:36:50.469 "method": "accel_set_options", 00:36:50.469 "params": { 00:36:50.469 "small_cache_size": 128, 00:36:50.469 "large_cache_size": 16, 00:36:50.469 "task_count": 2048, 00:36:50.469 "sequence_count": 2048, 00:36:50.469 "buf_count": 2048 00:36:50.469 } 00:36:50.469 } 00:36:50.469 ] 00:36:50.469 }, 00:36:50.469 { 00:36:50.469 "subsystem": "bdev", 00:36:50.469 "config": [ 00:36:50.469 { 00:36:50.469 "method": "bdev_set_options", 00:36:50.469 "params": { 00:36:50.469 "bdev_io_pool_size": 65535, 00:36:50.469 "bdev_io_cache_size": 256, 00:36:50.469 "bdev_auto_examine": true, 00:36:50.469 "iobuf_small_cache_size": 128, 00:36:50.469 "iobuf_large_cache_size": 16 00:36:50.469 } 00:36:50.469 }, 00:36:50.469 { 00:36:50.469 "method": "bdev_raid_set_options", 00:36:50.469 "params": { 00:36:50.469 "process_window_size_kb": 1024 00:36:50.469 } 00:36:50.469 }, 00:36:50.469 { 00:36:50.469 "method": "bdev_iscsi_set_options", 00:36:50.469 "params": { 00:36:50.469 "timeout_sec": 30 00:36:50.469 } 00:36:50.469 }, 00:36:50.469 { 00:36:50.469 "method": "bdev_nvme_set_options", 00:36:50.469 "params": { 00:36:50.469 "action_on_timeout": "none", 00:36:50.469 "timeout_us": 0, 00:36:50.469 "timeout_admin_us": 0, 00:36:50.469 "keep_alive_timeout_ms": 10000, 00:36:50.469 "arbitration_burst": 0, 00:36:50.469 "low_priority_weight": 0, 00:36:50.469 "medium_priority_weight": 0, 00:36:50.469 "high_priority_weight": 0, 00:36:50.469 "nvme_adminq_poll_period_us": 10000, 00:36:50.469 "nvme_ioq_poll_period_us": 0, 00:36:50.469 "io_queue_requests": 512, 00:36:50.469 "delay_cmd_submit": true, 00:36:50.469 "transport_retry_count": 4, 00:36:50.469 "bdev_retry_count": 3, 00:36:50.469 "transport_ack_timeout": 0, 00:36:50.469 "ctrlr_loss_timeout_sec": 0, 00:36:50.469 "reconnect_delay_sec": 0, 00:36:50.469 "fast_io_fail_timeout_sec": 0, 00:36:50.469 "disable_auto_failback": false, 00:36:50.469 "generate_uuids": false, 00:36:50.469 "transport_tos": 0, 00:36:50.469 "nvme_error_stat": false, 00:36:50.469 
"rdma_srq_size": 0, 00:36:50.469 "io_path_stat": false, 00:36:50.469 "allow_accel_sequence": false, 00:36:50.469 "rdma_max_cq_size": 0, 00:36:50.469 "rdma_cm_event_timeout_ms": 0, 00:36:50.469 "dhchap_digests": [ 00:36:50.469 "sha256", 00:36:50.469 "sha384", 00:36:50.470 "sha512" 00:36:50.470 ], 00:36:50.470 "dhchap_dhgroups": [ 00:36:50.470 "null", 00:36:50.470 "ffdhe2048", 00:36:50.470 "ffdhe3072", 00:36:50.470 "ffdhe4096", 00:36:50.470 "ffdhe6144", 00:36:50.470 "ffdhe8192" 00:36:50.470 ] 00:36:50.470 } 00:36:50.470 }, 00:36:50.470 { 00:36:50.470 "method": "bdev_nvme_attach_controller", 00:36:50.470 "params": { 00:36:50.470 "name": "nvme0", 00:36:50.470 "trtype": "TCP", 00:36:50.470 "adrfam": "IPv4", 00:36:50.470 "traddr": "127.0.0.1", 00:36:50.470 "trsvcid": "4420", 00:36:50.470 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:50.470 "prchk_reftag": false, 00:36:50.470 "prchk_guard": false, 00:36:50.470 "ctrlr_loss_timeout_sec": 0, 00:36:50.470 "reconnect_delay_sec": 0, 00:36:50.470 "fast_io_fail_timeout_sec": 0, 00:36:50.470 "psk": "key0", 00:36:50.470 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:50.470 "hdgst": false, 00:36:50.470 "ddgst": false 00:36:50.470 } 00:36:50.470 }, 00:36:50.470 { 00:36:50.470 "method": "bdev_nvme_set_hotplug", 00:36:50.470 "params": { 00:36:50.470 "period_us": 100000, 00:36:50.470 "enable": false 00:36:50.470 } 00:36:50.470 }, 00:36:50.470 { 00:36:50.470 "method": "bdev_wait_for_examine" 00:36:50.470 } 00:36:50.470 ] 00:36:50.470 }, 00:36:50.470 { 00:36:50.470 "subsystem": "nbd", 00:36:50.470 "config": [] 00:36:50.470 } 00:36:50.470 ] 00:36:50.470 }' 00:36:50.470 00:16:56 keyring_file -- keyring/file.sh@114 -- # killprocess 970429 00:36:50.470 00:16:56 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 970429 ']' 00:36:50.470 00:16:56 keyring_file -- common/autotest_common.sh@950 -- # kill -0 970429 00:36:50.470 00:16:56 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:50.470 00:16:56 keyring_file -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:50.470 00:16:56 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 970429 00:36:50.470 00:16:56 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:50.470 00:16:56 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:50.470 00:16:56 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 970429' 00:36:50.470 killing process with pid 970429 00:36:50.470 00:16:56 keyring_file -- common/autotest_common.sh@965 -- # kill 970429 00:36:50.470 Received shutdown signal, test time was about 1.000000 seconds 00:36:50.470 00:36:50.470 Latency(us) 00:36:50.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:50.470 =================================================================================================================== 00:36:50.470 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:50.470 00:16:56 keyring_file -- common/autotest_common.sh@970 -- # wait 970429 00:36:50.728 00:16:56 keyring_file -- keyring/file.sh@117 -- # bperfpid=971841 00:36:50.728 00:16:56 keyring_file -- keyring/file.sh@119 -- # waitforlisten 971841 /var/tmp/bperf.sock 00:36:50.728 00:16:56 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 971841 ']' 00:36:50.728 00:16:56 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:50.728 00:16:56 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:50.728 00:16:56 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:50.728 00:16:56 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
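Throughout this test, helpers like `get_key` and `get_refcnt` pipe `keyring_get_keys` through `jq '.[] | select(.name == "key0")'` and `jq -r .refcnt` to check how many references the attached controller still holds on a key. A rough Python equivalent of that jq selection, against a made-up sample of the RPC's JSON output (field names follow what the jq filters in this log expect; the exact output shape is an assumption):

```python
import json

def get_key(keys_json: str, name: str):
    """Equivalent of: jq '.[] | select(.name == "<name>")' applied to
    keyring_get_keys output; returns the matching key dict or None."""
    return next((k for k in json.loads(keys_json) if k.get("name") == name), None)

# Hypothetical keyring_get_keys output while nvme0 is attached with --psk key0:
sample = json.dumps([
    {"name": "key0", "path": "/tmp/tmp.6WzZC0JLg9", "refcnt": 2, "removed": False},
    {"name": "key1", "path": "/tmp/tmp.yrJXMdPnsx", "refcnt": 1, "removed": False},
])
```

This mirrors the checks in the log: refcnt is 2 while the controller holds key0, and after `keyring_file_remove_key` it drops to 1 with `removed: true`, since the bdev still pins the removed key until `bdev_nvme_detach_controller` runs.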
00:36:50.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:50.728 00:16:56 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:50.728 "subsystems": [ 00:36:50.728 { 00:36:50.728 "subsystem": "keyring", 00:36:50.728 "config": [ 00:36:50.728 { 00:36:50.728 "method": "keyring_file_add_key", 00:36:50.728 "params": { 00:36:50.728 "name": "key0", 00:36:50.728 "path": "/tmp/tmp.6WzZC0JLg9" 00:36:50.728 } 00:36:50.728 }, 00:36:50.728 { 00:36:50.728 "method": "keyring_file_add_key", 00:36:50.728 "params": { 00:36:50.728 "name": "key1", 00:36:50.728 "path": "/tmp/tmp.yrJXMdPnsx" 00:36:50.728 } 00:36:50.728 } 00:36:50.728 ] 00:36:50.728 }, 00:36:50.728 { 00:36:50.728 "subsystem": "iobuf", 00:36:50.728 "config": [ 00:36:50.728 { 00:36:50.728 "method": "iobuf_set_options", 00:36:50.728 "params": { 00:36:50.728 "small_pool_count": 8192, 00:36:50.728 "large_pool_count": 1024, 00:36:50.728 "small_bufsize": 8192, 00:36:50.728 "large_bufsize": 135168 00:36:50.728 } 00:36:50.728 } 00:36:50.728 ] 00:36:50.728 }, 00:36:50.728 { 00:36:50.728 "subsystem": "sock", 00:36:50.728 "config": [ 00:36:50.728 { 00:36:50.728 "method": "sock_set_default_impl", 00:36:50.728 "params": { 00:36:50.728 "impl_name": "posix" 00:36:50.728 } 00:36:50.728 }, 00:36:50.728 { 00:36:50.728 "method": "sock_impl_set_options", 00:36:50.728 "params": { 00:36:50.728 "impl_name": "ssl", 00:36:50.728 "recv_buf_size": 4096, 00:36:50.728 "send_buf_size": 4096, 00:36:50.728 "enable_recv_pipe": true, 00:36:50.728 "enable_quickack": false, 00:36:50.729 "enable_placement_id": 0, 00:36:50.729 "enable_zerocopy_send_server": true, 00:36:50.729 "enable_zerocopy_send_client": false, 00:36:50.729 "zerocopy_threshold": 0, 00:36:50.729 "tls_version": 0, 00:36:50.729 "enable_ktls": false 00:36:50.729 } 00:36:50.729 }, 00:36:50.729 { 00:36:50.729 "method": "sock_impl_set_options", 00:36:50.729 "params": { 00:36:50.729 "impl_name": "posix", 00:36:50.729 "recv_buf_size": 2097152, 
00:36:50.729 "send_buf_size": 2097152, 00:36:50.729 "enable_recv_pipe": true, 00:36:50.729 "enable_quickack": false, 00:36:50.729 "enable_placement_id": 0, 00:36:50.729 "enable_zerocopy_send_server": true, 00:36:50.729 "enable_zerocopy_send_client": false, 00:36:50.729 "zerocopy_threshold": 0, 00:36:50.729 "tls_version": 0, 00:36:50.729 "enable_ktls": false 00:36:50.729 } 00:36:50.729 } 00:36:50.729 ] 00:36:50.729 }, 00:36:50.729 { 00:36:50.729 "subsystem": "vmd", 00:36:50.729 "config": [] 00:36:50.729 }, 00:36:50.729 { 00:36:50.729 "subsystem": "accel", 00:36:50.729 "config": [ 00:36:50.729 { 00:36:50.729 "method": "accel_set_options", 00:36:50.729 "params": { 00:36:50.729 "small_cache_size": 128, 00:36:50.729 "large_cache_size": 16, 00:36:50.729 "task_count": 2048, 00:36:50.729 "sequence_count": 2048, 00:36:50.729 "buf_count": 2048 00:36:50.729 } 00:36:50.729 } 00:36:50.729 ] 00:36:50.729 }, 00:36:50.729 { 00:36:50.729 "subsystem": "bdev", 00:36:50.729 "config": [ 00:36:50.729 { 00:36:50.729 "method": "bdev_set_options", 00:36:50.729 "params": { 00:36:50.729 "bdev_io_pool_size": 65535, 00:36:50.729 "bdev_io_cache_size": 256, 00:36:50.729 "bdev_auto_examine": true, 00:36:50.729 "iobuf_small_cache_size": 128, 00:36:50.729 "iobuf_large_cache_size": 16 00:36:50.729 } 00:36:50.729 }, 00:36:50.729 { 00:36:50.729 "method": "bdev_raid_set_options", 00:36:50.729 "params": { 00:36:50.729 "process_window_size_kb": 1024 00:36:50.729 } 00:36:50.729 }, 00:36:50.729 { 00:36:50.729 "method": "bdev_iscsi_set_options", 00:36:50.729 "params": { 00:36:50.729 "timeout_sec": 30 00:36:50.729 } 00:36:50.729 }, 00:36:50.729 { 00:36:50.729 "method": "bdev_nvme_set_options", 00:36:50.729 "params": { 00:36:50.729 "action_on_timeout": "none", 00:36:50.729 "timeout_us": 0, 00:36:50.729 "timeout_admin_us": 0, 00:36:50.729 "keep_alive_timeout_ms": 10000, 00:36:50.729 "arbitration_burst": 0, 00:36:50.729 "low_priority_weight": 0, 00:36:50.729 "medium_priority_weight": 0, 00:36:50.729 
"high_priority_weight": 0, 00:36:50.729 "nvme_adminq_poll_period_us": 10000, 00:36:50.729 "nvme_ioq_poll_period_us": 0, 00:36:50.729 "io_queue_requests": 512, 00:36:50.729 "delay_cmd_submit": true, 00:36:50.729 "transport_retry_count": 4, 00:36:50.729 "bdev_retry_count": 3, 00:36:50.729 "transport_ack_timeout": 0, 00:36:50.729 "ctrlr_loss_timeout_sec": 0, 00:36:50.729 "reconnect_delay_sec": 0, 00:36:50.729 "fast_io_fail_timeout_sec": 0, 00:36:50.729 "disable_auto_failback": false, 00:36:50.729 "generate_uuids": false, 00:36:50.729 "transport_tos": 0, 00:36:50.729 "nvme_error_stat": false, 00:36:50.729 "rdma_srq_size": 0, 00:36:50.729 "io_path_stat": false, 00:36:50.729 "allow_accel_sequence": false, 00:36:50.729 00:16:56 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:50.729 "rdma_max_cq_size": 0, 00:36:50.729 "rdma_cm_event_timeout_ms": 0, 00:36:50.729 "dhchap_digests": [ 00:36:50.729 "sha256", 00:36:50.729 "sha384", 00:36:50.729 "sha512" 00:36:50.729 ], 00:36:50.729 "dhchap_dhgroups": [ 00:36:50.729 "null", 00:36:50.729 "ffdhe2048", 00:36:50.729 "ffdhe3072", 00:36:50.729 "ffdhe4096", 00:36:50.729 "ffdhe6144", 00:36:50.729 "ffdhe8192" 00:36:50.729 ] 00:36:50.729 } 00:36:50.729 }, 00:36:50.729 { 00:36:50.729 "method": "bdev_nvme_attach_controller", 00:36:50.729 "params": { 00:36:50.729 "name": "nvme0", 00:36:50.729 "trtype": "TCP", 00:36:50.729 "adrfam": "IPv4", 00:36:50.729 "traddr": "127.0.0.1", 00:36:50.729 "trsvcid": "4420", 00:36:50.729 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:50.729 "prchk_reftag": false, 00:36:50.729 "prchk_guard": false, 00:36:50.729 "ctrlr_loss_timeout_sec": 0, 00:36:50.729 "reconnect_delay_sec": 0, 00:36:50.729 "fast_io_fail_timeout_sec": 0, 00:36:50.729 "psk": "key0", 00:36:50.729 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:50.729 "hdgst": false, 00:36:50.729 "ddgst": false 00:36:50.729 } 00:36:50.729 }, 00:36:50.729 { 00:36:50.729 "method": "bdev_nvme_set_hotplug", 00:36:50.729 "params": { 00:36:50.729 
"period_us": 100000, 00:36:50.729 "enable": false 00:36:50.729 } 00:36:50.729 }, 00:36:50.729 { 00:36:50.729 "method": "bdev_wait_for_examine" 00:36:50.729 } 00:36:50.729 ] 00:36:50.729 }, 00:36:50.729 { 00:36:50.729 "subsystem": "nbd", 00:36:50.729 "config": [] 00:36:50.729 } 00:36:50.729 ] 00:36:50.729 }' 00:36:50.729 00:16:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:50.729 [2024-07-25 00:16:56.395468] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:36:50.729 [2024-07-25 00:16:56.395552] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid971841 ] 00:36:50.729 EAL: No free 2048 kB hugepages reported on node 1 00:36:50.987 [2024-07-25 00:16:56.456551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:50.987 [2024-07-25 00:16:56.544068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:51.245 [2024-07-25 00:16:56.727223] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:51.810 00:16:57 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:51.810 00:16:57 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:51.810 00:16:57 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:51.810 00:16:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:51.810 00:16:57 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:52.068 00:16:57 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:52.068 00:16:57 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:52.068 00:16:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:52.068 00:16:57 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:36:52.068 00:16:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.068 00:16:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.068 00:16:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:52.326 00:16:57 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:52.326 00:16:57 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:52.326 00:16:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:52.326 00:16:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:52.326 00:16:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.326 00:16:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.326 00:16:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:52.583 00:16:58 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:52.583 00:16:58 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:52.583 00:16:58 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:52.583 00:16:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:52.841 00:16:58 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:52.841 00:16:58 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:52.841 00:16:58 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.6WzZC0JLg9 /tmp/tmp.yrJXMdPnsx 00:36:52.841 00:16:58 keyring_file -- keyring/file.sh@20 -- # killprocess 971841 00:36:52.841 00:16:58 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 971841 ']' 00:36:52.841 00:16:58 keyring_file -- common/autotest_common.sh@950 -- # kill -0 971841 00:36:52.841 
00:16:58 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:52.841 00:16:58 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:52.841 00:16:58 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 971841 00:36:52.841 00:16:58 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:52.841 00:16:58 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:52.841 00:16:58 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 971841' 00:36:52.841 killing process with pid 971841 00:36:52.841 00:16:58 keyring_file -- common/autotest_common.sh@965 -- # kill 971841 00:36:52.841 Received shutdown signal, test time was about 1.000000 seconds 00:36:52.841 00:36:52.841 Latency(us) 00:36:52.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:52.841 =================================================================================================================== 00:36:52.841 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:52.841 00:16:58 keyring_file -- common/autotest_common.sh@970 -- # wait 971841 00:36:53.098 00:16:58 keyring_file -- keyring/file.sh@21 -- # killprocess 970421 00:36:53.098 00:16:58 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 970421 ']' 00:36:53.098 00:16:58 keyring_file -- common/autotest_common.sh@950 -- # kill -0 970421 00:36:53.098 00:16:58 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:53.098 00:16:58 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:53.098 00:16:58 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 970421 00:36:53.098 00:16:58 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:53.098 00:16:58 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:53.098 00:16:58 keyring_file -- common/autotest_common.sh@964 -- # echo 
'killing process with pid 970421' 00:36:53.098 killing process with pid 970421 00:36:53.098 00:16:58 keyring_file -- common/autotest_common.sh@965 -- # kill 970421 00:36:53.098 [2024-07-25 00:16:58.619817] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:53.098 00:16:58 keyring_file -- common/autotest_common.sh@970 -- # wait 970421 00:36:53.356 00:36:53.356 real 0m14.088s 00:36:53.356 user 0m34.697s 00:36:53.356 sys 0m3.330s 00:36:53.356 00:16:59 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:53.356 00:16:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:53.356 ************************************ 00:36:53.356 END TEST keyring_file 00:36:53.356 ************************************ 00:36:53.356 00:16:59 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:36:53.356 00:16:59 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:53.356 00:16:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:53.356 00:16:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:53.356 00:16:59 -- common/autotest_common.sh@10 -- # set +x 00:36:53.356 ************************************ 00:36:53.356 START TEST keyring_linux 00:36:53.356 ************************************ 00:36:53.356 00:16:59 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:53.615 * Looking for test storage... 
00:36:53.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:53.615 00:16:59 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:53.615 00:16:59 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:53.615 00:16:59 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:53.615 00:16:59 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:53.615 00:16:59 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:53.615 00:16:59 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:53.615 00:16:59 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.615 00:16:59 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.615 00:16:59 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.615 00:16:59 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:53.615 00:16:59 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:53.615 00:16:59 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:53.615 00:16:59 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:53.615 00:16:59 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:53.615 00:16:59 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:53.615 00:16:59 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:53.615 00:16:59 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:53.615 00:16:59 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:53.615 00:16:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:53.615 00:16:59 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:36:53.615 00:16:59 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:53.615 00:16:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:53.615 00:16:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:53.615 00:16:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:53.615 00:16:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:53.615 00:16:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:53.615 /tmp/:spdk-test:key0 00:36:53.615 00:16:59 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:53.615 00:16:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:53.615 00:16:59 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:53.615 00:16:59 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:53.615 00:16:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:53.615 00:16:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:53.615 00:16:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:53.615 00:16:59 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:53.615 00:16:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:53.615 00:16:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:53.615 /tmp/:spdk-test:key1 00:36:53.615 00:16:59 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=972247 00:36:53.615 00:16:59 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:53.615 00:16:59 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 972247 00:36:53.615 00:16:59 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 972247 ']' 00:36:53.615 00:16:59 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:53.615 00:16:59 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:53.615 00:16:59 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:53.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:53.615 00:16:59 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:53.615 00:16:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:53.615 [2024-07-25 00:16:59.264931] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
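The `prep_key`/`format_interchange_psk` trace above builds the TLS PSK interchange string that is later registered with the kernel keyring. A minimal sketch of that transformation, reconstructed from the xtrace alone (the CRC32 byte order and the exact prefix/digest formatting are assumptions inferred from this log, not taken from SPDK source):

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    """Sketch of the format_key step traced above: wrap a configured PSK
    in the NVMe/TCP TLS PSK interchange shape (prefix, two-digit hash id,
    base64 of key bytes + CRC32). Little-endian CRC order is an assumption."""
    psk = key.encode()
    crc = zlib.crc32(psk).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(psk + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)

# key0 from the trace, digest 0 (no hash)
key0 = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
print(key0)
```

With digest 0 this produces a string of the same shape as the `NVMeTLSkey-1:00:...:` value that `keyctl add` registers below; decoding the base64 tail recovers the original key bytes plus their checksum.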
00:36:53.615 [2024-07-25 00:16:59.265018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid972247 ] 00:36:53.615 EAL: No free 2048 kB hugepages reported on node 1 00:36:53.615 [2024-07-25 00:16:59.322449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.874 [2024-07-25 00:16:59.413133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:54.133 00:16:59 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:54.133 00:16:59 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:36:54.133 00:16:59 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:54.133 00:16:59 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.133 00:16:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:54.133 [2024-07-25 00:16:59.674067] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:54.133 null0 00:36:54.133 [2024-07-25 00:16:59.706129] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:54.133 [2024-07-25 00:16:59.706576] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:54.133 00:16:59 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.133 00:16:59 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:54.133 862687951 00:36:54.133 00:16:59 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:54.133 913264140 00:36:54.133 00:16:59 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=972266 00:36:54.133 00:16:59 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 972266 /var/tmp/bperf.sock 
00:36:54.133 00:16:59 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:54.133 00:16:59 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 972266 ']' 00:36:54.133 00:16:59 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:54.133 00:16:59 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:54.133 00:16:59 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:54.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:54.133 00:16:59 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:54.133 00:16:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:54.133 [2024-07-25 00:16:59.776041] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
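bdevperf is launched above with `-r /var/tmp/bperf.sock -z --wait-for-rpc`, so every subsequent `bperf_cmd` is SPDK's `rpc.py` sending a JSON-RPC 2.0 request over that Unix socket. A minimal sketch of one such exchange against a mock responder (the naive single-message framing and the parameter name `enable` are illustrative assumptions, not SPDK's actual wire handling):

```python
import json
import socket
import threading

def serve(conn):
    # Mock SPDK app side: read one small JSON-RPC request, answer it.
    buf = b""
    while not buf.endswith(b"}"):  # naive framing; fine for one small message
        chunk = conn.recv(4096)
        if not chunk:
            return
        buf += chunk
    req = json.loads(buf)
    resp = {"jsonrpc": "2.0", "id": req["id"], "result": True}
    conn.sendall(json.dumps(resp).encode())
    conn.close()

def rpc_call(conn, method, params=None):
    # Client side, roughly what rpc.py -s /var/tmp/bperf.sock <method> does.
    req = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params is not None:
        req["params"] = params
    conn.sendall(json.dumps(req).encode())
    return json.loads(conn.recv(4096))

# socketpair stands in for the AF_UNIX listener at /var/tmp/bperf.sock
client, server = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
t = threading.Thread(target=serve, args=(server,))
t.start()
resp = rpc_call(client, "keyring_linux_set_options", {"enable": True})
t.join()
print(resp["result"])
```

The `--wait-for-rpc` flag is why the trace issues `keyring_linux_set_options` before `framework_start_init`: the app sits idle on the socket until initialization is explicitly triggered over RPC.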
00:36:54.133 [2024-07-25 00:16:59.776158] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid972266 ] 00:36:54.133 EAL: No free 2048 kB hugepages reported on node 1 00:36:54.133 [2024-07-25 00:16:59.840201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:54.391 [2024-07-25 00:16:59.934573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:54.391 00:16:59 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:54.391 00:16:59 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:36:54.391 00:16:59 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:54.391 00:16:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:54.649 00:17:00 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:54.649 00:17:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:54.907 00:17:00 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:54.907 00:17:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:55.164 [2024-07-25 00:17:00.839851] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:55.422 nvme0n1 00:36:55.422 
00:17:00 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:55.422 00:17:00 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:55.422 00:17:00 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:55.422 00:17:00 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:55.422 00:17:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.422 00:17:00 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:55.680 00:17:01 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:55.680 00:17:01 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:55.680 00:17:01 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:55.680 00:17:01 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:55.680 00:17:01 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.680 00:17:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.680 00:17:01 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:55.937 00:17:01 keyring_linux -- keyring/linux.sh@25 -- # sn=862687951 00:36:55.937 00:17:01 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:55.937 00:17:01 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:55.937 00:17:01 keyring_linux -- keyring/linux.sh@26 -- # [[ 862687951 == \8\6\2\6\8\7\9\5\1 ]] 00:36:55.937 00:17:01 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 862687951 00:36:55.937 00:17:01 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:55.937 00:17:01 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:55.937 Running I/O for 1 seconds... 00:36:56.870 00:36:56.870 Latency(us) 00:36:56.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:56.870 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:56.870 nvme0n1 : 1.02 4145.55 16.19 0.00 0.00 30550.38 10874.12 43884.85 00:36:56.870 =================================================================================================================== 00:36:56.870 Total : 4145.55 16.19 0.00 0.00 30550.38 10874.12 43884.85 00:36:56.870 0 00:36:56.870 00:17:02 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:56.870 00:17:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:57.128 00:17:02 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:57.128 00:17:02 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:57.128 00:17:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:57.128 00:17:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:57.128 00:17:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:57.128 00:17:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:57.385 00:17:03 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:57.386 00:17:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:57.386 00:17:03 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:57.386 00:17:03 keyring_linux 
-- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:57.386 00:17:03 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:36:57.386 00:17:03 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:57.386 00:17:03 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:57.386 00:17:03 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:57.386 00:17:03 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:57.386 00:17:03 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:57.386 00:17:03 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:57.386 00:17:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:57.643 [2024-07-25 00:17:03.321324] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:57.643 [2024-07-25 00:17:03.321852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c1270 (107): Transport endpoint is not connected 00:36:57.643 [2024-07-25 00:17:03.322840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x5c1270 (9): Bad file descriptor 00:36:57.643 [2024-07-25 00:17:03.323838] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:57.644 [2024-07-25 00:17:03.323861] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:57.644 [2024-07-25 00:17:03.323877] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:57.644 request: 00:36:57.644 { 00:36:57.644 "name": "nvme0", 00:36:57.644 "trtype": "tcp", 00:36:57.644 "traddr": "127.0.0.1", 00:36:57.644 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:57.644 "adrfam": "ipv4", 00:36:57.644 "trsvcid": "4420", 00:36:57.644 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:57.644 "psk": ":spdk-test:key1", 00:36:57.644 "method": "bdev_nvme_attach_controller", 00:36:57.644 "req_id": 1 00:36:57.644 } 00:36:57.644 Got JSON-RPC error response 00:36:57.644 response: 00:36:57.644 { 00:36:57.644 "code": -5, 00:36:57.644 "message": "Input/output error" 00:36:57.644 } 00:36:57.644 00:17:03 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:36:57.644 00:17:03 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:57.644 00:17:03 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:57.644 00:17:03 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:57.644 00:17:03 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:57.644 00:17:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:57.644 00:17:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:57.644 00:17:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:57.644 00:17:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:57.644 00:17:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:57.644 00:17:03 keyring_linux -- keyring/linux.sh@33 -- # sn=862687951 00:36:57.644 00:17:03 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 862687951 00:36:57.644 1 links removed 00:36:57.644 00:17:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:57.644 00:17:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:57.644 00:17:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:57.644 00:17:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:57.644 00:17:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:57.644 00:17:03 keyring_linux -- keyring/linux.sh@33 -- # sn=913264140 00:36:57.644 00:17:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 913264140 00:36:57.644 1 links removed 00:36:57.644 00:17:03 keyring_linux -- keyring/linux.sh@41 -- # killprocess 972266 00:36:57.644 00:17:03 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 972266 ']' 00:36:57.644 00:17:03 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 972266 00:36:57.644 00:17:03 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:36:57.644 00:17:03 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:57.644 00:17:03 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 972266 00:36:57.901 00:17:03 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:57.902 00:17:03 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:57.902 00:17:03 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 972266' 00:36:57.902 killing process with pid 972266 00:36:57.902 00:17:03 keyring_linux -- common/autotest_common.sh@965 -- # kill 972266 00:36:57.902 Received shutdown signal, test time was about 1.000000 seconds 00:36:57.902 00:36:57.902 Latency(us) 00:36:57.902 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:57.902 
=================================================================================================================== 00:36:57.902 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:57.902 00:17:03 keyring_linux -- common/autotest_common.sh@970 -- # wait 972266 00:36:57.902 00:17:03 keyring_linux -- keyring/linux.sh@42 -- # killprocess 972247 00:36:57.902 00:17:03 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 972247 ']' 00:36:57.902 00:17:03 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 972247 00:36:57.902 00:17:03 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:36:57.902 00:17:03 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:57.902 00:17:03 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 972247 00:36:57.902 00:17:03 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:57.902 00:17:03 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:57.902 00:17:03 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 972247' 00:36:57.902 killing process with pid 972247 00:36:57.902 00:17:03 keyring_linux -- common/autotest_common.sh@965 -- # kill 972247 00:36:57.902 00:17:03 keyring_linux -- common/autotest_common.sh@970 -- # wait 972247 00:36:58.467 00:36:58.467 real 0m4.913s 00:36:58.467 user 0m9.244s 00:36:58.467 sys 0m1.512s 00:36:58.467 00:17:03 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:58.467 00:17:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:58.467 ************************************ 00:36:58.467 END TEST keyring_linux 00:36:58.467 ************************************ 00:36:58.467 00:17:04 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:36:58.467 00:17:04 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:36:58.467 00:17:04 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:36:58.467 00:17:04 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 
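The bdevperf table earlier reports 4145.55 IOPS and 16.19 MiB/s at an average latency of 30550.38 µs for the randread pass. Those figures are mutually consistent, which can be cross-checked with plain arithmetic and Little's law (in-flight I/O ≈ IOPS × latency, which should approach the configured queue depth):

```python
iops = 4145.55           # from the bdevperf summary above
avg_lat_s = 30550.38e-6  # average latency, converted from microseconds
io_size = 4096           # -o 4k
qd = 128                 # -q 128

throughput_mib = iops * io_size / 2**20  # bytes/s -> MiB/s
in_flight = iops * avg_lat_s             # Little's law: L = lambda * W

print(round(throughput_mib, 2))  # 16.19, matching the MiB/s column
print(round(in_flight))          # ~127, close to the queue depth of 128
```

The in-flight estimate landing just under the queue depth of 128 is expected for a 1-second run, where ramp-up and shutdown shave a little off the steady-state average.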
00:36:58.467 00:17:04 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:36:58.467 00:17:04 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:36:58.467 00:17:04 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:36:58.467 00:17:04 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:36:58.467 00:17:04 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:36:58.467 00:17:04 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:36:58.467 00:17:04 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:36:58.467 00:17:04 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:36:58.467 00:17:04 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:58.467 00:17:04 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:58.467 00:17:04 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:58.467 00:17:04 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:36:58.467 00:17:04 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:36:58.467 00:17:04 -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:58.467 00:17:04 -- common/autotest_common.sh@10 -- # set +x 00:36:58.467 00:17:04 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:36:58.467 00:17:04 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:36:58.467 00:17:04 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:36:58.467 00:17:04 -- common/autotest_common.sh@10 -- # set +x 00:37:00.439 INFO: APP EXITING 00:37:00.439 INFO: killing all VMs 00:37:00.439 INFO: killing vhost app 00:37:00.439 INFO: EXIT DONE 00:37:01.374 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:01.374 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:01.374 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:01.374 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:01.374 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:01.374 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:01.374 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:01.374 0000:00:04.0 (8086 0e20): Already 
using the ioatdma driver 00:37:01.374 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:37:01.374 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:01.374 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:01.374 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:01.374 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:01.374 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:01.374 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:01.374 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:01.374 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:02.750 Cleaning 00:37:02.750 Removing: /var/run/dpdk/spdk0/config 00:37:02.750 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:02.750 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:02.750 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:02.750 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:02.750 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:02.750 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:02.750 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:02.750 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:02.750 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:02.750 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:02.750 Removing: /var/run/dpdk/spdk1/config 00:37:02.750 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:02.750 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:02.750 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:02.750 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:02.750 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:02.750 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:02.750 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:02.750 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 
00:37:02.750 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:02.750 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:02.750 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:02.750 Removing: /var/run/dpdk/spdk2/config 00:37:02.750 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:02.750 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:02.750 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:02.750 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:02.750 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:02.750 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:02.750 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:02.750 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:02.750 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:02.750 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:02.750 Removing: /var/run/dpdk/spdk3/config 00:37:02.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:02.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:02.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:02.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:02.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:02.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:02.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:02.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:02.750 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:02.750 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:02.750 Removing: /var/run/dpdk/spdk4/config 00:37:02.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:02.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:02.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:02.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:02.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:02.750 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:02.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:02.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:02.750 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:02.750 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:02.750 Removing: /dev/shm/bdev_svc_trace.1 00:37:02.750 Removing: /dev/shm/nvmf_trace.0 00:37:02.750 Removing: /dev/shm/spdk_tgt_trace.pid653381 00:37:02.750 Removing: /var/run/dpdk/spdk0 00:37:02.750 Removing: /var/run/dpdk/spdk1 00:37:02.750 Removing: /var/run/dpdk/spdk2 00:37:02.750 Removing: /var/run/dpdk/spdk3 00:37:02.750 Removing: /var/run/dpdk/spdk4 00:37:02.750 Removing: /var/run/dpdk/spdk_pid651839 00:37:02.750 Removing: /var/run/dpdk/spdk_pid652568 00:37:02.750 Removing: /var/run/dpdk/spdk_pid653381 00:37:02.750 Removing: /var/run/dpdk/spdk_pid653818 00:37:02.750 Removing: /var/run/dpdk/spdk_pid654513 00:37:03.009 Removing: /var/run/dpdk/spdk_pid654655 00:37:03.009 Removing: /var/run/dpdk/spdk_pid655364 00:37:03.009 Removing: /var/run/dpdk/spdk_pid655379 00:37:03.009 Removing: /var/run/dpdk/spdk_pid655621 00:37:03.009 Removing: /var/run/dpdk/spdk_pid656812 00:37:03.009 Removing: /var/run/dpdk/spdk_pid657718 00:37:03.009 Removing: /var/run/dpdk/spdk_pid657954 00:37:03.009 Removing: /var/run/dpdk/spdk_pid658212 00:37:03.009 Removing: /var/run/dpdk/spdk_pid658420 00:37:03.009 Removing: /var/run/dpdk/spdk_pid658608 00:37:03.009 Removing: /var/run/dpdk/spdk_pid658763 00:37:03.009 Removing: /var/run/dpdk/spdk_pid658925 00:37:03.009 Removing: /var/run/dpdk/spdk_pid659111 00:37:03.009 Removing: /var/run/dpdk/spdk_pid659675 00:37:03.009 Removing: /var/run/dpdk/spdk_pid662044 00:37:03.009 Removing: /var/run/dpdk/spdk_pid662206 00:37:03.009 Removing: /var/run/dpdk/spdk_pid662376 00:37:03.009 Removing: /var/run/dpdk/spdk_pid662382 00:37:03.009 Removing: /var/run/dpdk/spdk_pid662770 00:37:03.009 Removing: /var/run/dpdk/spdk_pid662816 00:37:03.009 Removing: 
/var/run/dpdk/spdk_pid663124 00:37:03.009 Removing: /var/run/dpdk/spdk_pid663249 00:37:03.009 Removing: /var/run/dpdk/spdk_pid663418 00:37:03.009 Removing: /var/run/dpdk/spdk_pid663508 00:37:03.009 Removing: /var/run/dpdk/spdk_pid663713 00:37:03.009 Removing: /var/run/dpdk/spdk_pid663723 00:37:03.009 Removing: /var/run/dpdk/spdk_pid664092 00:37:03.009 Removing: /var/run/dpdk/spdk_pid664245 00:37:03.009 Removing: /var/run/dpdk/spdk_pid664557 00:37:03.009 Removing: /var/run/dpdk/spdk_pid664722 00:37:03.009 Removing: /var/run/dpdk/spdk_pid664760 00:37:03.009 Removing: /var/run/dpdk/spdk_pid664849 00:37:03.009 Removing: /var/run/dpdk/spdk_pid665098 00:37:03.009 Removing: /var/run/dpdk/spdk_pid665254 00:37:03.009 Removing: /var/run/dpdk/spdk_pid665413 00:37:03.009 Removing: /var/run/dpdk/spdk_pid665681 00:37:03.009 Removing: /var/run/dpdk/spdk_pid665841 00:37:03.009 Removing: /var/run/dpdk/spdk_pid665999 00:37:03.009 Removing: /var/run/dpdk/spdk_pid666161 00:37:03.009 Removing: /var/run/dpdk/spdk_pid666429 00:37:03.009 Removing: /var/run/dpdk/spdk_pid666587 00:37:03.009 Removing: /var/run/dpdk/spdk_pid666744 00:37:03.009 Removing: /var/run/dpdk/spdk_pid667013 00:37:03.009 Removing: /var/run/dpdk/spdk_pid667179 00:37:03.009 Removing: /var/run/dpdk/spdk_pid667335 00:37:03.009 Removing: /var/run/dpdk/spdk_pid667486 00:37:03.009 Removing: /var/run/dpdk/spdk_pid667762 00:37:03.009 Removing: /var/run/dpdk/spdk_pid667921 00:37:03.009 Removing: /var/run/dpdk/spdk_pid668083 00:37:03.009 Removing: /var/run/dpdk/spdk_pid668345 00:37:03.009 Removing: /var/run/dpdk/spdk_pid668519 00:37:03.009 Removing: /var/run/dpdk/spdk_pid668673 00:37:03.009 Removing: /var/run/dpdk/spdk_pid668855 00:37:03.009 Removing: /var/run/dpdk/spdk_pid669063 00:37:03.009 Removing: /var/run/dpdk/spdk_pid671123 00:37:03.009 Removing: /var/run/dpdk/spdk_pid724650 00:37:03.009 Removing: /var/run/dpdk/spdk_pid727144 00:37:03.009 Removing: /var/run/dpdk/spdk_pid733961 00:37:03.009 Removing: 
/var/run/dpdk/spdk_pid737712
00:37:03.009 Removing: /var/run/dpdk/spdk_pid740207
00:37:03.009 Removing: /var/run/dpdk/spdk_pid740614
00:37:03.009 Removing: /var/run/dpdk/spdk_pid747844
00:37:03.009 Removing: /var/run/dpdk/spdk_pid747853
00:37:03.009 Removing: /var/run/dpdk/spdk_pid748384
00:37:03.009 Removing: /var/run/dpdk/spdk_pid749040
00:37:03.009 Removing: /var/run/dpdk/spdk_pid749696
00:37:03.009 Removing: /var/run/dpdk/spdk_pid750092
00:37:03.009 Removing: /var/run/dpdk/spdk_pid750102
00:37:03.009 Removing: /var/run/dpdk/spdk_pid750244
00:37:03.009 Removing: /var/run/dpdk/spdk_pid750377
00:37:03.009 Removing: /var/run/dpdk/spdk_pid750379
00:37:03.009 Removing: /var/run/dpdk/spdk_pid751035
00:37:03.009 Removing: /var/run/dpdk/spdk_pid751660
00:37:03.009 Removing: /var/run/dpdk/spdk_pid752232
00:37:03.010 Removing: /var/run/dpdk/spdk_pid752628
00:37:03.010 Removing: /var/run/dpdk/spdk_pid752691
00:37:03.010 Removing: /var/run/dpdk/spdk_pid752891
00:37:03.010 Removing: /var/run/dpdk/spdk_pid753767
00:37:03.010 Removing: /var/run/dpdk/spdk_pid754491
00:37:03.010 Removing: /var/run/dpdk/spdk_pid759864
00:37:03.010 Removing: /var/run/dpdk/spdk_pid760030
00:37:03.010 Removing: /var/run/dpdk/spdk_pid762533
00:37:03.010 Removing: /var/run/dpdk/spdk_pid766218
00:37:03.010 Removing: /var/run/dpdk/spdk_pid768882
00:37:03.010 Removing: /var/run/dpdk/spdk_pid775249
00:37:03.010 Removing: /var/run/dpdk/spdk_pid780437
00:37:03.010 Removing: /var/run/dpdk/spdk_pid781631
00:37:03.010 Removing: /var/run/dpdk/spdk_pid782295
00:37:03.010 Removing: /var/run/dpdk/spdk_pid792479
00:37:03.010 Removing: /var/run/dpdk/spdk_pid794680
00:37:03.010 Removing: /var/run/dpdk/spdk_pid819968
00:37:03.010 Removing: /var/run/dpdk/spdk_pid822741
00:37:03.010 Removing: /var/run/dpdk/spdk_pid823844
00:37:03.010 Removing: /var/run/dpdk/spdk_pid825116
00:37:03.010 Removing: /var/run/dpdk/spdk_pid825250
00:37:03.010 Removing: /var/run/dpdk/spdk_pid825387
00:37:03.010 Removing: /var/run/dpdk/spdk_pid825407
00:37:03.010 Removing: /var/run/dpdk/spdk_pid825832
00:37:03.010 Removing: /var/run/dpdk/spdk_pid827265
00:37:03.010 Removing: /var/run/dpdk/spdk_pid828367
00:37:03.010 Removing: /var/run/dpdk/spdk_pid828765
00:37:03.010 Removing: /var/run/dpdk/spdk_pid830296
00:37:03.010 Removing: /var/run/dpdk/spdk_pid830710
00:37:03.010 Removing: /var/run/dpdk/spdk_pid831268
00:37:03.010 Removing: /var/run/dpdk/spdk_pid833653
00:37:03.010 Removing: /var/run/dpdk/spdk_pid836909
00:37:03.010 Removing: /var/run/dpdk/spdk_pid840440
00:37:03.010 Removing: /var/run/dpdk/spdk_pid863804
00:37:03.010 Removing: /var/run/dpdk/spdk_pid866562
00:37:03.010 Removing: /var/run/dpdk/spdk_pid870197
00:37:03.010 Removing: /var/run/dpdk/spdk_pid871259
00:37:03.010 Removing: /var/run/dpdk/spdk_pid872250
00:37:03.010 Removing: /var/run/dpdk/spdk_pid874899
00:37:03.268 Removing: /var/run/dpdk/spdk_pid877130
00:37:03.268 Removing: /var/run/dpdk/spdk_pid881259
00:37:03.268 Removing: /var/run/dpdk/spdk_pid881333
00:37:03.268 Removing: /var/run/dpdk/spdk_pid884099
00:37:03.268 Removing: /var/run/dpdk/spdk_pid884235
00:37:03.268 Removing: /var/run/dpdk/spdk_pid884369
00:37:03.268 Removing: /var/run/dpdk/spdk_pid884631
00:37:03.268 Removing: /var/run/dpdk/spdk_pid884636
00:37:03.268 Removing: /var/run/dpdk/spdk_pid885708
00:37:03.268 Removing: /var/run/dpdk/spdk_pid887008
00:37:03.268 Removing: /var/run/dpdk/spdk_pid888275
00:37:03.268 Removing: /var/run/dpdk/spdk_pid889977
00:37:03.268 Removing: /var/run/dpdk/spdk_pid891151
00:37:03.268 Removing: /var/run/dpdk/spdk_pid892333
00:37:03.268 Removing: /var/run/dpdk/spdk_pid896125
00:37:03.268 Removing: /var/run/dpdk/spdk_pid896464
00:37:03.268 Removing: /var/run/dpdk/spdk_pid897743
00:37:03.268 Removing: /var/run/dpdk/spdk_pid898482
00:37:03.268 Removing: /var/run/dpdk/spdk_pid902068
00:37:03.268 Removing: /var/run/dpdk/spdk_pid904030
00:37:03.268 Removing: /var/run/dpdk/spdk_pid907392
00:37:03.268 Removing: /var/run/dpdk/spdk_pid910784
00:37:03.268 Removing: /var/run/dpdk/spdk_pid917016
00:37:03.268 Removing: /var/run/dpdk/spdk_pid922075
00:37:03.268 Removing: /var/run/dpdk/spdk_pid922078
00:37:03.268 Removing: /var/run/dpdk/spdk_pid934253
00:37:03.268 Removing: /var/run/dpdk/spdk_pid934661
00:37:03.268 Removing: /var/run/dpdk/spdk_pid935079
00:37:03.268 Removing: /var/run/dpdk/spdk_pid935591
00:37:03.268 Removing: /var/run/dpdk/spdk_pid936164
00:37:03.268 Removing: /var/run/dpdk/spdk_pid936578
00:37:03.268 Removing: /var/run/dpdk/spdk_pid936982
00:37:03.268 Removing: /var/run/dpdk/spdk_pid937392
00:37:03.268 Removing: /var/run/dpdk/spdk_pid939882
00:37:03.268 Removing: /var/run/dpdk/spdk_pid940026
00:37:03.268 Removing: /var/run/dpdk/spdk_pid943811
00:37:03.268 Removing: /var/run/dpdk/spdk_pid943880
00:37:03.268 Removing: /var/run/dpdk/spdk_pid945591
00:37:03.268 Removing: /var/run/dpdk/spdk_pid950493
00:37:03.268 Removing: /var/run/dpdk/spdk_pid950498
00:37:03.268 Removing: /var/run/dpdk/spdk_pid953454
00:37:03.268 Removing: /var/run/dpdk/spdk_pid955310
00:37:03.268 Removing: /var/run/dpdk/spdk_pid956736
00:37:03.268 Removing: /var/run/dpdk/spdk_pid957540
00:37:03.268 Removing: /var/run/dpdk/spdk_pid958942
00:37:03.268 Removing: /var/run/dpdk/spdk_pid959701
00:37:03.268 Removing: /var/run/dpdk/spdk_pid964976
00:37:03.268 Removing: /var/run/dpdk/spdk_pid965348
00:37:03.268 Removing: /var/run/dpdk/spdk_pid965756
00:37:03.268 Removing: /var/run/dpdk/spdk_pid967306
00:37:03.268 Removing: /var/run/dpdk/spdk_pid967586
00:37:03.268 Removing: /var/run/dpdk/spdk_pid967986
00:37:03.268 Removing: /var/run/dpdk/spdk_pid970421
00:37:03.268 Removing: /var/run/dpdk/spdk_pid970429
00:37:03.268 Removing: /var/run/dpdk/spdk_pid971841
00:37:03.268 Removing: /var/run/dpdk/spdk_pid972247
00:37:03.268 Removing: /var/run/dpdk/spdk_pid972266
00:37:03.268 Clean
00:37:03.268 00:17:08 -- common/autotest_common.sh@1447 -- # return 0
00:37:03.268 00:17:08 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:37:03.268 00:17:08 -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:03.268 00:17:08 -- common/autotest_common.sh@10 -- # set +x
00:37:03.268 00:17:08 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:37:03.268 00:17:08 -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:03.268 00:17:08 -- common/autotest_common.sh@10 -- # set +x
00:37:03.268 00:17:08 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:03.268 00:17:08 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:37:03.268 00:17:08 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:37:03.268 00:17:08 -- spdk/autotest.sh@391 -- # hash lcov
00:37:03.268 00:17:08 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:37:03.268 00:17:08 -- spdk/autotest.sh@393 -- # hostname
00:37:03.268 00:17:08 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:37:03.526 geninfo: WARNING: invalid characters removed from testname!
00:37:35.600 00:17:36 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:35.600 00:17:40 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:38.137 00:17:43 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:41.427 00:17:46 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:43.965 00:17:49 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:47.259 00:17:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:49.794 00:17:55 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:49.794 00:17:55 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:49.794 00:17:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:37:49.794 00:17:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:49.794 00:17:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:49.794 00:17:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:49.794 00:17:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:49.794 00:17:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:49.794 00:17:55 -- paths/export.sh@5 -- $ export PATH
00:37:49.794 00:17:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:49.794 00:17:55 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:37:49.794 00:17:55 -- common/autobuild_common.sh@440 -- $ date +%s
00:37:49.794 00:17:55 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721859475.XXXXXX
00:37:49.794 00:17:55 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721859475.Te3yNQ
00:37:49.794 00:17:55 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:37:49.794 00:17:55 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']'
00:37:49.794 00:17:55 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:37:49.794 00:17:55 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:37:49.794 00:17:55 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:37:49.794 00:17:55 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:37:49.795 00:17:55 -- common/autobuild_common.sh@456 -- $ get_config_params
00:37:49.795 00:17:55 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:37:49.795 00:17:55 -- common/autotest_common.sh@10 -- $ set +x
00:37:49.795 00:17:55 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:37:49.795 00:17:55 -- common/autobuild_common.sh@458 -- $ start_monitor_resources
00:37:49.795 00:17:55 -- pm/common@17 -- $ local monitor
00:37:49.795 00:17:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:49.795 00:17:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:49.795 00:17:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:49.795 00:17:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:49.795 00:17:55 -- pm/common@21 -- $ date +%s
00:37:49.795 00:17:55 -- pm/common@21 -- $ date +%s
00:37:49.795 00:17:55 -- pm/common@25 -- $ sleep 1
00:37:49.795 00:17:55 -- pm/common@21 -- $ date +%s
00:37:49.795 00:17:55 -- pm/common@21 -- $ date +%s
00:37:49.795 00:17:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721859475
00:37:49.795 00:17:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721859475
00:37:49.795 00:17:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721859475
00:37:49.795 00:17:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721859475
00:37:49.795 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721859475_collect-vmstat.pm.log
00:37:49.795 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721859475_collect-cpu-load.pm.log
00:37:49.795 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721859475_collect-cpu-temp.pm.log
00:37:49.795 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721859475_collect-bmc-pm.bmc.pm.log
00:37:50.783 00:17:56 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT
00:37:50.783 00:17:56 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:37:50.783 00:17:56 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:50.783 00:17:56 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:37:50.783 00:17:56 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:37:50.783 00:17:56 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:37:50.783 00:17:56 -- spdk/autopackage.sh@19 -- $ timing_finish
00:37:50.783 00:17:56 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:50.783 00:17:56 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:37:50.783 00:17:56 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:50.783 00:17:56 -- spdk/autopackage.sh@20 -- $ exit 0
00:37:50.783 00:17:56 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:37:50.783 00:17:56 -- pm/common@29 -- $ signal_monitor_resources TERM
00:37:50.783 00:17:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:37:50.783 00:17:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:50.783 00:17:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:37:50.783 00:17:56 -- pm/common@44 -- $ pid=983593
00:37:50.783 00:17:56 -- pm/common@50 -- $ kill -TERM 983593
00:37:50.783 00:17:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:50.783 00:17:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:37:50.783 00:17:56 -- pm/common@44 -- $ pid=983595
00:37:50.783 00:17:56 -- pm/common@50 -- $ kill -TERM 983595
00:37:50.783 00:17:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:50.783 00:17:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:37:50.783 00:17:56 -- pm/common@44 -- $ pid=983597
00:37:50.783 00:17:56 -- pm/common@50 -- $ kill -TERM 983597
00:37:50.783 00:17:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:50.783 00:17:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:37:50.783 00:17:56 -- pm/common@44 -- $ pid=983624
00:37:50.783 00:17:56 -- pm/common@50 -- $ sudo -E kill -TERM 983624
00:37:50.783 + [[ -n 547903 ]]
00:37:50.783 + sudo kill 547903
00:37:50.792 [Pipeline] }
00:37:50.811 [Pipeline] // stage
00:37:50.817 [Pipeline] }
00:37:50.835 [Pipeline] // timeout
00:37:50.841 [Pipeline] }
00:37:50.858 [Pipeline] // catchError
00:37:50.864 [Pipeline] }
00:37:50.890 [Pipeline] // wrap
00:37:50.896 [Pipeline] }
00:37:50.912 [Pipeline] // catchError
00:37:50.922 [Pipeline] stage
00:37:50.925 [Pipeline] { (Epilogue)
00:37:50.940 [Pipeline] catchError
00:37:50.942 [Pipeline] {
00:37:50.957 [Pipeline] echo
00:37:50.959 Cleanup processes
00:37:50.965 [Pipeline] sh
00:37:51.249 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:51.249 983724 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:37:51.249 983859 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:51.263 [Pipeline] sh
00:37:51.548 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:51.548 ++ awk '{print $1}'
00:37:51.548 ++ grep -v 'sudo pgrep'
00:37:51.548 + sudo kill -9 983724
00:37:51.560 [Pipeline] sh
00:37:51.842 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:04.083 [Pipeline] sh
00:38:04.368 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:04.368 Artifacts sizes are good
00:38:04.384 [Pipeline] archiveArtifacts
00:38:04.391 Archiving artifacts
00:38:04.630 [Pipeline] sh
00:38:04.913 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:38:04.928 [Pipeline] cleanWs
00:38:04.938 [WS-CLEANUP] Deleting project workspace...
00:38:04.938 [WS-CLEANUP] Deferred wipeout is used...
00:38:04.946 [WS-CLEANUP] done
00:38:04.947 [Pipeline] }
00:38:04.969 [Pipeline] // catchError
00:38:04.982 [Pipeline] sh
00:38:05.259 + logger -p user.info -t JENKINS-CI
00:38:05.267 [Pipeline] }
00:38:05.283 [Pipeline] // stage
00:38:05.288 [Pipeline] }
00:38:05.305 [Pipeline] // node
00:38:05.311 [Pipeline] End of Pipeline
00:38:05.348 Finished: SUCCESS